Dec 01 14:45:48 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 01 14:45:48 crc restorecon[4589]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 01 14:45:48 crc restorecon[4589]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 01 14:45:48 crc restorecon[4589]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc 
restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 14:45:48 crc restorecon[4589]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 14:45:48 crc restorecon[4589]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 14:45:48 crc restorecon[4589]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 14:45:48 crc 
restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 01 
14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 01 14:45:48 crc restorecon[4589]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 14:45:48 crc 
restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 14:45:48 crc restorecon[4589]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 14:45:48 crc restorecon[4589]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 01 14:45:48 crc restorecon[4589]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 14:45:48 crc 
restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 01 14:45:48 crc restorecon[4589]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 
crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 01 14:45:48 crc restorecon[4589]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 14:45:48 crc restorecon[4589]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:48 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:48 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc 
restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 
01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 14:45:49 crc restorecon[4589]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 
crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc 
restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc 
restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc 
restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc 
restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 14:45:49 crc 
restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 14:45:49 crc restorecon[4589]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 14:45:49 crc restorecon[4589]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 14:45:49 crc restorecon[4589]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 01 14:45:49 crc kubenswrapper[4637]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 01 14:45:49 crc kubenswrapper[4637]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 01 14:45:49 crc kubenswrapper[4637]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 01 14:45:49 crc kubenswrapper[4637]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 01 14:45:49 crc kubenswrapper[4637]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 01 14:45:49 crc kubenswrapper[4637]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.590794 4637 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.593900 4637 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.593922 4637 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.593927 4637 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.593949 4637 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.593954 4637 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.593959 4637 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.593963 4637 feature_gate.go:330] unrecognized feature gate: Example Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.593967 4637 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.593972 4637 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.593977 4637 feature_gate.go:330] unrecognized 
feature gate: SignatureStores Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.593982 4637 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.593988 4637 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.593993 4637 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.593999 4637 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594004 4637 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594010 4637 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594014 4637 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594018 4637 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594022 4637 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594027 4637 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594031 4637 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594035 4637 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594039 4637 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594043 4637 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 
14:45:49.594047 4637 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594051 4637 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594064 4637 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594069 4637 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594073 4637 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594077 4637 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594081 4637 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594085 4637 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594091 4637 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594095 4637 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594100 4637 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594104 4637 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594109 4637 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594113 4637 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594118 4637 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594123 4637 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594127 4637 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594133 4637 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594139 4637 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594148 4637 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594154 4637 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594160 4637 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594165 4637 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594170 4637 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594175 4637 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594180 4637 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594184 4637 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594188 4637 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594191 4637 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594196 4637 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594201 4637 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594205 4637 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594209 4637 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594213 4637 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594217 4637 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594222 4637 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594225 4637 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594229 4637 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594234 4637 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594239 4637 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594243 4637 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594247 4637 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594251 4637 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594255 4637 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594259 4637 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594262 4637 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.594266 4637 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594470 4637 flags.go:64] FLAG: --address="0.0.0.0"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594484 4637 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594492 4637 flags.go:64] FLAG: --anonymous-auth="true"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594498 4637 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594504 4637 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594508 4637 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594514 4637 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594520 4637 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594524 4637 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594554 4637 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594560 4637 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594564 4637 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594568 4637 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594573 4637 flags.go:64] FLAG: --cgroup-root=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594577 4637 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594581 4637 flags.go:64] FLAG: --client-ca-file=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594585 4637 flags.go:64] FLAG: --cloud-config=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594589 4637 flags.go:64] FLAG: --cloud-provider=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594593 4637 flags.go:64] FLAG: --cluster-dns="[]"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594599 4637 flags.go:64] FLAG: --cluster-domain=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594603 4637 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594608 4637 flags.go:64] FLAG: --config-dir=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594612 4637 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594618 4637 flags.go:64] FLAG: --container-log-max-files="5"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594624 4637 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594629 4637 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594633 4637 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594638 4637 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594644 4637 flags.go:64] FLAG: --contention-profiling="false"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594649 4637 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594653 4637 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594658 4637 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594662 4637 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594667 4637 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594672 4637 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594676 4637 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594680 4637 flags.go:64] FLAG: --enable-load-reader="false"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594684 4637 flags.go:64] FLAG: --enable-server="true"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594690 4637 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594697 4637 flags.go:64] FLAG: --event-burst="100"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594701 4637 flags.go:64] FLAG: --event-qps="50"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594705 4637 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594709 4637 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594713 4637 flags.go:64] FLAG: --eviction-hard=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594718 4637 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594722 4637 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594726 4637 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594731 4637 flags.go:64] FLAG: --eviction-soft=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594735 4637 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594739 4637 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594743 4637 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594747 4637 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594751 4637 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594755 4637 flags.go:64] FLAG: --fail-swap-on="true"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594759 4637 flags.go:64] FLAG: --feature-gates=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594764 4637 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594768 4637 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594773 4637 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594777 4637 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594781 4637 flags.go:64] FLAG: --healthz-port="10248"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594785 4637 flags.go:64] FLAG: --help="false"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594790 4637 flags.go:64] FLAG: --hostname-override=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594794 4637 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594798 4637 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594802 4637 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594806 4637 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594810 4637 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594815 4637 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594819 4637 flags.go:64] FLAG: --image-service-endpoint=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594822 4637 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594828 4637 flags.go:64] FLAG: --kube-api-burst="100"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594832 4637 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594837 4637 flags.go:64] FLAG: --kube-api-qps="50"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594843 4637 flags.go:64] FLAG: --kube-reserved=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594848 4637 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594853 4637 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594859 4637 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594864 4637 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594869 4637 flags.go:64] FLAG: --lock-file=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594874 4637 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594884 4637 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594889 4637 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594895 4637 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594900 4637 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594905 4637 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594909 4637 flags.go:64] FLAG: --logging-format="text"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594914 4637 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594919 4637 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594923 4637 flags.go:64] FLAG: --manifest-url=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594928 4637 flags.go:64] FLAG: --manifest-url-header=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594950 4637 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594955 4637 flags.go:64] FLAG: --max-open-files="1000000"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594961 4637 flags.go:64] FLAG: --max-pods="110"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594965 4637 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594970 4637 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594975 4637 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594979 4637 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594984 4637 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594989 4637 flags.go:64] FLAG: --node-ip="192.168.126.11"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.594994 4637 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595004 4637 flags.go:64] FLAG: --node-status-max-images="50"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595009 4637 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595015 4637 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595019 4637 flags.go:64] FLAG: --pod-cidr=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595024 4637 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595032 4637 flags.go:64] FLAG: --pod-manifest-path=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595036 4637 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595041 4637 flags.go:64] FLAG: --pods-per-core="0"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595045 4637 flags.go:64] FLAG: --port="10250"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595051 4637 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595055 4637 flags.go:64] FLAG: --provider-id=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595060 4637 flags.go:64] FLAG: --qos-reserved=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595064 4637 flags.go:64] FLAG: --read-only-port="10255"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595069 4637 flags.go:64] FLAG: --register-node="true"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595074 4637 flags.go:64] FLAG: --register-schedulable="true"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595078 4637 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595086 4637 flags.go:64] FLAG: --registry-burst="10"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595090 4637 flags.go:64] FLAG: --registry-qps="5"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595094 4637 flags.go:64] FLAG: --reserved-cpus=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595098 4637 flags.go:64] FLAG: --reserved-memory=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595104 4637 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595108 4637 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595112 4637 flags.go:64] FLAG: --rotate-certificates="false"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595116 4637 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595120 4637 flags.go:64] FLAG: --runonce="false"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595124 4637 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595129 4637 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595133 4637 flags.go:64] FLAG: --seccomp-default="false"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595137 4637 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595142 4637 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595146 4637 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595150 4637 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595155 4637 flags.go:64] FLAG: --storage-driver-password="root"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595159 4637 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595163 4637 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595168 4637 flags.go:64] FLAG: --storage-driver-user="root"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595172 4637 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595176 4637 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595182 4637 flags.go:64] FLAG: --system-cgroups=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595186 4637 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595194 4637 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595198 4637 flags.go:64] FLAG: --tls-cert-file=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595202 4637 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595208 4637 flags.go:64] FLAG: --tls-min-version=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595212 4637 flags.go:64] FLAG: --tls-private-key-file=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595217 4637 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595221 4637 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595225 4637 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595230 4637 flags.go:64] FLAG: --v="2"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595235 4637 flags.go:64] FLAG: --version="false"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595241 4637 flags.go:64] FLAG: --vmodule=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595246 4637 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595250 4637 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595348 4637 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595353 4637 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595357 4637 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595362 4637 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595366 4637 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595370 4637 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595374 4637 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595380 4637 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595384 4637 feature_gate.go:330] unrecognized feature gate: Example
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595388 4637 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595393 4637 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595397 4637 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595400 4637 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595404 4637 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595408 4637 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595412 4637 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595416 4637 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595419 4637 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595423 4637 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595426 4637 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595430 4637 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595433 4637 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595437 4637 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595440 4637 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595444 4637 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595447 4637 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595451 4637 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595454 4637 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595458 4637 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595462 4637 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595465 4637 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595469 4637 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595472 4637 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595479 4637 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595483 4637 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595486 4637 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595490 4637 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595493 4637 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595497 4637 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595500 4637 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595503 4637 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595507 4637 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595511 4637 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595514 4637 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595518 4637 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595521 4637 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595527 4637 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595531 4637 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595535 4637 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595539 4637 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595543 4637 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595547 4637 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595550 4637 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595555 4637 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595559 4637 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595563 4637 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595567 4637 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595571 4637 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595576 4637 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595580 4637 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595584 4637 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595587 4637 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595592 4637 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595596 4637 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595601 4637 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595607 4637 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595611 4637 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595614 4637 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595618 4637 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595621 4637 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.595625 4637 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.595638 4637 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.606715 4637 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.606784 4637 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.606972 4637 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.606999 4637 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607012 4637 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607021 4637 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607031 4637 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607038 4637 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607046 4637 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607055 4637 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607063 4637 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607070 4637 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607079 4637 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607086 4637 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607095 4637 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607103 4637 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607110 4637 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607118 4637 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607126 4637 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607134 4637 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607142 4637 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607152 4637 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607159 4637 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607170 4637 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607183 4637 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607194 4637 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607206 4637 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607217 4637 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607226 4637 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607234 4637 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607242 4637 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607250 4637 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607258 4637 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607265 4637 feature_gate.go:330] unrecognized feature gate: Example Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607274 4637 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607283 4637 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607292 4637 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607300 4637 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607308 4637 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607316 4637 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607326 4637 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607336 4637 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607347 4637 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607355 4637 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607364 4637 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607373 4637 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607381 4637 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607390 4637 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607398 4637 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607407 4637 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607415 4637 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607424 4637 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607432 4637 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607444 4637 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607455 4637 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607463 4637 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607472 4637 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607480 4637 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607488 4637 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607496 4637 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607503 4637 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607511 4637 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607519 4637 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607527 4637 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607534 4637 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607542 4637 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607549 4637 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607557 4637 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607565 4637 
feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607573 4637 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607580 4637 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607588 4637 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607596 4637 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.607611 4637 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607869 4637 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607885 4637 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607895 4637 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607904 4637 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607913 4637 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607923 4637 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. 
It will be removed in a future release. Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607957 4637 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607966 4637 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607975 4637 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607985 4637 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.607994 4637 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608002 4637 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608010 4637 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608018 4637 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608026 4637 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608037 4637 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608045 4637 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608052 4637 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608060 4637 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608069 4637 feature_gate.go:330] unrecognized feature 
gate: IngressControllerLBSubnetsAWS Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608077 4637 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608084 4637 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608092 4637 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608100 4637 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608108 4637 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608118 4637 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608127 4637 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608137 4637 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608147 4637 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608157 4637 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608167 4637 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608177 4637 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608187 4637 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608197 4637 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 
14:45:49.608207 4637 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608217 4637 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608227 4637 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608237 4637 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608246 4637 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608257 4637 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608268 4637 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608279 4637 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608289 4637 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608300 4637 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608311 4637 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608323 4637 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608335 4637 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608347 4637 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608359 4637 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608372 4637 feature_gate.go:330] unrecognized feature gate: Example Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608383 4637 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608394 4637 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608404 4637 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608416 4637 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608426 4637 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608439 4637 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608447 4637 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608455 4637 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608463 4637 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608471 4637 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608479 4637 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608487 4637 
feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608496 4637 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608505 4637 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608516 4637 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608527 4637 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608537 4637 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608547 4637 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608557 4637 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608567 4637 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.608577 4637 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.608593 4637 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.609001 4637 server.go:940] "Client rotation is on, will bootstrap in 
background" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.615368 4637 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.615552 4637 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.616676 4637 server.go:997] "Starting client certificate rotation" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.616746 4637 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.616947 4637 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-02 14:56:16.401954646 +0000 UTC Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.617071 4637 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 768h10m26.784886344s for next certificate rotation Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.626356 4637 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.629049 4637 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.638655 4637 log.go:25] "Validated CRI v1 runtime API" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.655778 4637 log.go:25] "Validated CRI v1 image API" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.657274 4637 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.660512 4637 fs.go:133] Filesystem UUIDs: 
map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-01-14-40-22-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.660577 4637 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.684543 4637 manager.go:217] Machine: {Timestamp:2025-12-01 14:45:49.682665905 +0000 UTC m=+0.200374813 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799998 MemoryCapacity:25199472640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:e36facbc-e27f-4191-ad98-6bea77d7ef5d BootID:d9978c86-16e7-4847-903a-8e83206e0eb1 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039894528 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599738368 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076107 
HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599734272 Type:vfs Inodes:3076107 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:93:82:65 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:93:82:65 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:88:d3:f0 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:60:d1:1e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:f1:e1:43 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:a8:e0:96 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:92:78:19:29:6c:95 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:42:b4:05:e4:f5:ff Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199472640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 
Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.685050 4637 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.685455 4637 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.686112 4637 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.686372 4637 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.686428 4637 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.686756 4637 topology_manager.go:138] "Creating topology manager with none policy"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.686770 4637 container_manager_linux.go:303] "Creating device plugin manager"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.687107 4637 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.687162 4637 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.687621 4637 state_mem.go:36] "Initialized new in-memory state store"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.687746 4637 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.688751 4637 kubelet.go:418] "Attempting to sync node with API server"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.688780 4637 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.688810 4637 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.688830 4637 kubelet.go:324] "Adding apiserver pod source"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.688846 4637 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.691482 4637 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.691854 4637 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.692135 4637 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused
Dec 01 14:45:49 crc kubenswrapper[4637]: E1201 14:45:49.692336 4637 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError"
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.692050 4637 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused
Dec 01 14:45:49 crc kubenswrapper[4637]: E1201 14:45:49.692534 4637 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.693010 4637 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.693626 4637 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.693655 4637 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.693664 4637 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.693673 4637 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.693687 4637 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.693696 4637 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.693705 4637 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.693722 4637 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.693732 4637 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.693743 4637 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.693795 4637 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.693805 4637 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.694069 4637 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.694595 4637 server.go:1280] "Started kubelet"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.694945 4637 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.695346 4637 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused
Dec 01 14:45:49 crc systemd[1]: Started Kubernetes Kubelet.
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.708698 4637 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.709570 4637 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 01 14:45:49 crc kubenswrapper[4637]: E1201 14:45:49.711149 4637 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.204:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187d1eab3613112a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-01 14:45:49.694562602 +0000 UTC m=+0.212271460,LastTimestamp:2025-12-01 14:45:49.694562602 +0000 UTC m=+0.212271460,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.717035 4637 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.717096 4637 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.718388 4637 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.718350 4637 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.718820 4637 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.718376 4637 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 16:11:20.217298609 +0000 UTC
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.719132 4637 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 337h25m30.498176433s for next certificate rotation
Dec 01 14:45:49 crc kubenswrapper[4637]: E1201 14:45:49.720061 4637 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.720358 4637 factory.go:55] Registering systemd factory
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.720447 4637 factory.go:221] Registration of the systemd container factory successfully
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.721069 4637 server.go:460] "Adding debug handlers to kubelet server"
Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.721312 4637 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused
Dec 01 14:45:49 crc kubenswrapper[4637]: E1201 14:45:49.721473 4637 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError"
Dec 01 14:45:49 crc kubenswrapper[4637]: E1201 14:45:49.722059 4637 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="200ms"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.724581 4637 factory.go:153] Registering CRI-O factory
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.724609 4637 factory.go:221] Registration of the crio container factory successfully
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.724688 4637 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.724714 4637 factory.go:103] Registering Raw factory
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.724737 4637 manager.go:1196] Started watching for new ooms in manager
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.729074 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.730345 4637 manager.go:319] Starting recovery of all containers
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731324 4637 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731357 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731393 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731409 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731422 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731432 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731443 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731453 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731466 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731476 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731486 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731496 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731509 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731561 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731581 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731593 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731606 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731619 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731630 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731639 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731653 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731664 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731675 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731684 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731693 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731703 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731716 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731753 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731763 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731774 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731784 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731799 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731811 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731822 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731833 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731844 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731874 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731884 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731894 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731906 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731917 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731972 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731986 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.731997 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732008 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732018 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732029 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732040 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732050 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732060 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732107 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732117 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732140 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732151 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732161 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732174 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732186 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732196 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732206 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732216 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732226 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732237 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732248 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732257 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732270 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732279 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732290 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732300 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732311 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732323 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732332 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732344 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732353 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732365 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732376 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732385 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732394 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732403 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732412 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732423 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732435 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732446 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732455 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732466 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732476 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732487 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732499 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9"
volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732511 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732521 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732531 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732541 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732553 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732563 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 01 
14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732572 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732582 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732591 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732600 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732610 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732620 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 
14:45:49.732629 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732639 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732649 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732659 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732671 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732686 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732699 4637 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732710 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732722 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732731 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732741 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732751 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732761 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732771 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732782 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732794 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732804 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732813 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732822 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732857 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732867 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732876 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732885 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732893 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732902 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732911 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732919 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732942 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732954 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732962 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732970 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732979 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732989 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.732998 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733007 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733017 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733029 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733044 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733056 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733068 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733078 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733086 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733095 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733104 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733114 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733124 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733149 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733168 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733184 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733197 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733210 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733220 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733233 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733244 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733255 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" 
seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733266 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733279 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733292 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733306 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733317 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733331 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 
14:45:49.733341 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733356 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733365 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733377 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733390 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733400 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733410 4637 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733421 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733431 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733443 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733454 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733465 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733475 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733485 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733495 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733511 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733522 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733532 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733541 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733554 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733564 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733573 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733584 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733605 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733622 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" 
seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733632 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733643 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733655 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733666 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733677 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733686 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 
14:45:49.733696 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733706 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733719 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733728 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733739 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733749 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733760 4637 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733770 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733782 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733792 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733806 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733818 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733829 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733840 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733850 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733859 4637 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733867 4637 reconstruct.go:97] "Volume reconstruction finished" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.733874 4637 reconciler.go:26] "Reconciler: start to sync state" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.751774 4637 manager.go:324] Recovery completed Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.761812 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.763671 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.763708 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:49 crc kubenswrapper[4637]: 
I1201 14:45:49.763736 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.764821 4637 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.764836 4637 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.764855 4637 state_mem.go:36] "Initialized new in-memory state store" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.768319 4637 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.770048 4637 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.770084 4637 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.770107 4637 kubelet.go:2335] "Starting kubelet main sync loop" Dec 01 14:45:49 crc kubenswrapper[4637]: E1201 14:45:49.770152 4637 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 01 14:45:49 crc kubenswrapper[4637]: W1201 14:45:49.808711 4637 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Dec 01 14:45:49 crc kubenswrapper[4637]: E1201 14:45:49.809017 4637 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" 
logger="UnhandledError" Dec 01 14:45:49 crc kubenswrapper[4637]: E1201 14:45:49.820596 4637 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.857691 4637 policy_none.go:49] "None policy: Start" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.860582 4637 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.860622 4637 state_mem.go:35] "Initializing new in-memory state store" Dec 01 14:45:49 crc kubenswrapper[4637]: E1201 14:45:49.870268 4637 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.912955 4637 manager.go:334] "Starting Device Plugin manager" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.913036 4637 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.913052 4637 server.go:79] "Starting device plugin registration server" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.913482 4637 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.913506 4637 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.913755 4637 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.913845 4637 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 01 14:45:49 crc kubenswrapper[4637]: I1201 14:45:49.913859 4637 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 01 14:45:49 crc kubenswrapper[4637]: E1201 14:45:49.921409 4637 
eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 01 14:45:49 crc kubenswrapper[4637]: E1201 14:45:49.923385 4637 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="400ms" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.014481 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.016911 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.016965 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.016974 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.017001 4637 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 01 14:45:50 crc kubenswrapper[4637]: E1201 14:45:50.017424 4637 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.204:6443: connect: connection refused" node="crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.070418 4637 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.070605 4637 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.072088 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.072191 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.072290 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.072703 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.072801 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.073364 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.074037 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.074136 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.074163 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.074318 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.074386 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:50 crc 
kubenswrapper[4637]: I1201 14:45:50.074439 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.074605 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.074732 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.074788 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.076224 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.076267 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.076278 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.076454 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.076526 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.076540 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.076775 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.076989 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.077135 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.077977 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.078005 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.078014 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.078720 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.078877 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.078959 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.079104 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.079212 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.079251 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.080502 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.080526 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.080538 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.080774 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.080797 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.080807 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.080977 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.081011 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.082195 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.082221 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.082233 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.138343 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.138417 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.138451 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 14:45:50 crc 
kubenswrapper[4637]: I1201 14:45:50.138489 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.138568 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.138597 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.138621 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.138646 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.138673 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.138698 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.138725 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.138752 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.138781 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.138808 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.138833 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.218520 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.219723 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.219771 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.219782 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.219818 4637 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 01 14:45:50 crc kubenswrapper[4637]: E1201 14:45:50.220459 4637 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.204:6443: connect: connection refused" node="crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.239632 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.239697 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.239717 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.239779 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.239831 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.239865 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.239733 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.239910 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.240109 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.240145 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.240156 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.240185 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" 
(UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.240199 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.240230 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.240237 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.240257 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.240261 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.240289 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.240312 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.240314 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.240344 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.240366 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.240391 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod 
\"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.240409 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.240441 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.240459 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.240474 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.240501 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.240514 4637 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.240531 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: E1201 14:45:50.325260 4637 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="800ms" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.409387 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.414119 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.431700 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: W1201 14:45:50.440203 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-dbd2b02495fa9a8ec230bf2c2a4b9202fe3692d9af4cca998a32f4a03e5bde30 WatchSource:0}: Error finding container dbd2b02495fa9a8ec230bf2c2a4b9202fe3692d9af4cca998a32f4a03e5bde30: Status 404 returned error can't find the container with id dbd2b02495fa9a8ec230bf2c2a4b9202fe3692d9af4cca998a32f4a03e5bde30 Dec 01 14:45:50 crc kubenswrapper[4637]: W1201 14:45:50.443585 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-dc9c8a1bc26405b30365680ea75d0ed4c2f921b696d299a2c3965d8cea416f1e WatchSource:0}: Error finding container dc9c8a1bc26405b30365680ea75d0ed4c2f921b696d299a2c3965d8cea416f1e: Status 404 returned error can't find the container with id dc9c8a1bc26405b30365680ea75d0ed4c2f921b696d299a2c3965d8cea416f1e Dec 01 14:45:50 crc kubenswrapper[4637]: W1201 14:45:50.451751 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-9271aa79e80e5485c4ba751dd108c9bef6d55d970477cea6cd45c188c0fc4049 WatchSource:0}: Error finding container 9271aa79e80e5485c4ba751dd108c9bef6d55d970477cea6cd45c188c0fc4049: Status 404 returned error can't find the container with id 9271aa79e80e5485c4ba751dd108c9bef6d55d970477cea6cd45c188c0fc4049 Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.455539 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.462391 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 14:45:50 crc kubenswrapper[4637]: W1201 14:45:50.488293 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-72260d4a829f741dbbcd9a1970e78e153356b0f9508d80dcdb0721a0b17e27b2 WatchSource:0}: Error finding container 72260d4a829f741dbbcd9a1970e78e153356b0f9508d80dcdb0721a0b17e27b2: Status 404 returned error can't find the container with id 72260d4a829f741dbbcd9a1970e78e153356b0f9508d80dcdb0721a0b17e27b2 Dec 01 14:45:50 crc kubenswrapper[4637]: W1201 14:45:50.491089 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-2c4392bec168b111fb035ac548a53cd8755f38fc38cdd3d1f1aa76aef0810360 WatchSource:0}: Error finding container 2c4392bec168b111fb035ac548a53cd8755f38fc38cdd3d1f1aa76aef0810360: Status 404 returned error can't find the container with id 2c4392bec168b111fb035ac548a53cd8755f38fc38cdd3d1f1aa76aef0810360 Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.621642 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.625412 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.625464 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.625476 4637 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.625514 4637 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 01 14:45:50 crc kubenswrapper[4637]: E1201 14:45:50.626156 4637 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.204:6443: connect: connection refused" node="crc" Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.696894 4637 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.774267 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9271aa79e80e5485c4ba751dd108c9bef6d55d970477cea6cd45c188c0fc4049"} Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.775274 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dbd2b02495fa9a8ec230bf2c2a4b9202fe3692d9af4cca998a32f4a03e5bde30"} Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.778194 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dc9c8a1bc26405b30365680ea75d0ed4c2f921b696d299a2c3965d8cea416f1e"} Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.779403 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2c4392bec168b111fb035ac548a53cd8755f38fc38cdd3d1f1aa76aef0810360"} Dec 01 14:45:50 crc kubenswrapper[4637]: I1201 14:45:50.780285 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"72260d4a829f741dbbcd9a1970e78e153356b0f9508d80dcdb0721a0b17e27b2"} Dec 01 14:45:50 crc kubenswrapper[4637]: W1201 14:45:50.801321 4637 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Dec 01 14:45:50 crc kubenswrapper[4637]: E1201 14:45:50.801406 4637 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Dec 01 14:45:51 crc kubenswrapper[4637]: W1201 14:45:51.014848 4637 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Dec 01 14:45:51 crc kubenswrapper[4637]: E1201 14:45:51.014912 4637 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Dec 01 14:45:51 crc kubenswrapper[4637]: W1201 
14:45:51.104673 4637 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Dec 01 14:45:51 crc kubenswrapper[4637]: E1201 14:45:51.104748 4637 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Dec 01 14:45:51 crc kubenswrapper[4637]: E1201 14:45:51.126622 4637 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="1.6s" Dec 01 14:45:51 crc kubenswrapper[4637]: W1201 14:45:51.191461 4637 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Dec 01 14:45:51 crc kubenswrapper[4637]: E1201 14:45:51.191531 4637 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.427132 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.428654 4637 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.428719 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.428734 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.428778 4637 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 01 14:45:51 crc kubenswrapper[4637]: E1201 14:45:51.429530 4637 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.204:6443: connect: connection refused" node="crc" Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.696129 4637 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.783973 4637 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6c3ad0090ab80cedcfddadaa0042868e4a96fdb3868f181d94ca8807b8eb5316" exitCode=0 Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.784050 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6c3ad0090ab80cedcfddadaa0042868e4a96fdb3868f181d94ca8807b8eb5316"} Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.784167 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.785345 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.785377 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.785385 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.785435 4637 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="6289a7db0399411b7aa516cf9953e3e199b7c83e4858133819dc0b436ffcddfc" exitCode=0 Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.785482 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"6289a7db0399411b7aa516cf9953e3e199b7c83e4858133819dc0b436ffcddfc"} Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.785531 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.786517 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.786552 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.786564 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.787037 4637 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="b75d6a35f5c0d1532624b1b3dbee1fe360046b2f6ba685411684c93f8b557d2e" exitCode=0 Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.787079 4637 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"b75d6a35f5c0d1532624b1b3dbee1fe360046b2f6ba685411684c93f8b557d2e"} Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.787173 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.787829 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.787854 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.787865 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.790999 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa"} Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.791038 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109"} Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.791051 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058"} Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.791062 4637 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245"} Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.791020 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.791779 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.791807 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.791818 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.793015 4637 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5" exitCode=0 Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.793045 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5"} Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.793098 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.793673 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.793703 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 
14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.793712 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.794889 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.795459 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.795490 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:51 crc kubenswrapper[4637]: I1201 14:45:51.795510 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.798252 4637 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8319215497d984f0c98a6a085e6dd833460309f42973591f347f4e2bd3e78afd" exitCode=0 Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.798406 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.798444 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8319215497d984f0c98a6a085e6dd833460309f42973591f347f4e2bd3e78afd"} Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.799807 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.799841 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.799851 4637 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.801896 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"226ba1817928934b4987c768dc91c2b010d39f84323f2d84555a9e3418e1563b"} Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.801921 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.806200 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.806322 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.806437 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.811616 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1844efe5596b8a4c272d93b0b118516d918bcbafc8f469c49a5c01ef90db42c0"} Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.811834 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e2ea0f19f27900daddcad33f43457b692b87c8948009801adfc7ee636e49e7a5"} Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.811965 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c542c15fb110a9f497f95bd8f0abeeec65e9b41552d5c428012654d7ecb8bf75"} Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.811876 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.814966 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.815009 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.815021 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.827028 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c"} Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.827073 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d"} Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.827087 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5"} Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.827100 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5"} Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.827114 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169"} Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.827158 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.827390 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.828327 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.828371 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.828386 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.830306 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.830372 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:52 crc kubenswrapper[4637]: I1201 14:45:52.830449 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.029716 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 
14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.031497 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.031538 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.031549 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.031577 4637 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.171503 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.240018 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.831830 4637 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="52bb076a09c6fe9618614e8b7133cf42dc32a6efe734538f5d56b1114d213cfc" exitCode=0 Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.831893 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"52bb076a09c6fe9618614e8b7133cf42dc32a6efe734538f5d56b1114d213cfc"} Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.831989 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.832051 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.832173 4637 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.832455 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.831924 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.832956 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.832994 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.833003 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.833719 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.833734 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.833736 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.833748 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.833752 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.833755 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:53 crc 
kubenswrapper[4637]: I1201 14:45:53.834364 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.834384 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.834391 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.937098 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.937282 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.938389 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.938429 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:53 crc kubenswrapper[4637]: I1201 14:45:53.938442 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:54 crc kubenswrapper[4637]: I1201 14:45:54.416269 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:45:54 crc kubenswrapper[4637]: I1201 14:45:54.836952 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cba66cd67575cc0815740327a9654e9cd645f1b534c900fae6abf574486af8c5"} Dec 01 14:45:54 crc kubenswrapper[4637]: I1201 14:45:54.836991 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4745ff3fc9e5b6f1098fe5c637ac8d4ad044d379f63b1ca3cc55860e6b88203a"} Dec 01 14:45:54 crc kubenswrapper[4637]: I1201 14:45:54.837002 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"61ad8f66449e9c63ef064e28dbad535ca1ced286e4380a80b9a949a0f6564ec5"} Dec 01 14:45:54 crc kubenswrapper[4637]: I1201 14:45:54.837011 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ad597e5c2389c17758b3fc9db8788270dffc87d0d28275fac201280fda804788"} Dec 01 14:45:54 crc kubenswrapper[4637]: I1201 14:45:54.837019 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ae402c32018970263fb93d1de345040ca963771ae3f8b142b1e9c72617bd5fde"} Dec 01 14:45:54 crc kubenswrapper[4637]: I1201 14:45:54.837012 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:54 crc kubenswrapper[4637]: I1201 14:45:54.837027 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:54 crc kubenswrapper[4637]: I1201 14:45:54.837208 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:54 crc kubenswrapper[4637]: I1201 14:45:54.838254 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:54 crc kubenswrapper[4637]: I1201 14:45:54.838273 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:54 crc kubenswrapper[4637]: I1201 14:45:54.838281 4637 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:54 crc kubenswrapper[4637]: I1201 14:45:54.838340 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:54 crc kubenswrapper[4637]: I1201 14:45:54.838390 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:54 crc kubenswrapper[4637]: I1201 14:45:54.838409 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:54 crc kubenswrapper[4637]: I1201 14:45:54.838353 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:54 crc kubenswrapper[4637]: I1201 14:45:54.838437 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:54 crc kubenswrapper[4637]: I1201 14:45:54.838445 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:55 crc kubenswrapper[4637]: I1201 14:45:55.839482 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:55 crc kubenswrapper[4637]: I1201 14:45:55.839604 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:55 crc kubenswrapper[4637]: I1201 14:45:55.840645 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:55 crc kubenswrapper[4637]: I1201 14:45:55.840732 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:55 crc kubenswrapper[4637]: I1201 14:45:55.840823 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:55 crc kubenswrapper[4637]: 
I1201 14:45:55.840902 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:55 crc kubenswrapper[4637]: I1201 14:45:55.841004 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:55 crc kubenswrapper[4637]: I1201 14:45:55.841048 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:56 crc kubenswrapper[4637]: I1201 14:45:56.125005 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 01 14:45:56 crc kubenswrapper[4637]: I1201 14:45:56.362897 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 14:45:56 crc kubenswrapper[4637]: I1201 14:45:56.363088 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:56 crc kubenswrapper[4637]: I1201 14:45:56.364152 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:56 crc kubenswrapper[4637]: I1201 14:45:56.364191 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:56 crc kubenswrapper[4637]: I1201 14:45:56.364202 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:56 crc kubenswrapper[4637]: I1201 14:45:56.764189 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 14:45:56 crc kubenswrapper[4637]: I1201 14:45:56.770177 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 14:45:56 crc kubenswrapper[4637]: I1201 
14:45:56.842119 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:56 crc kubenswrapper[4637]: I1201 14:45:56.842150 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:56 crc kubenswrapper[4637]: I1201 14:45:56.843237 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:56 crc kubenswrapper[4637]: I1201 14:45:56.843267 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:56 crc kubenswrapper[4637]: I1201 14:45:56.843276 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:56 crc kubenswrapper[4637]: I1201 14:45:56.843290 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:56 crc kubenswrapper[4637]: I1201 14:45:56.843341 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:56 crc kubenswrapper[4637]: I1201 14:45:56.843361 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:57 crc kubenswrapper[4637]: I1201 14:45:57.844801 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:57 crc kubenswrapper[4637]: I1201 14:45:57.846108 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:57 crc kubenswrapper[4637]: I1201 14:45:57.846223 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:57 crc kubenswrapper[4637]: I1201 14:45:57.846268 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 
01 14:45:58 crc kubenswrapper[4637]: I1201 14:45:58.168396 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 01 14:45:58 crc kubenswrapper[4637]: I1201 14:45:58.168643 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:45:58 crc kubenswrapper[4637]: I1201 14:45:58.170647 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:45:58 crc kubenswrapper[4637]: I1201 14:45:58.170693 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:45:58 crc kubenswrapper[4637]: I1201 14:45:58.170704 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:45:59 crc kubenswrapper[4637]: E1201 14:45:59.921534 4637 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 01 14:46:00 crc kubenswrapper[4637]: I1201 14:46:00.879643 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 14:46:00 crc kubenswrapper[4637]: I1201 14:46:00.879847 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:46:00 crc kubenswrapper[4637]: I1201 14:46:00.881380 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:00 crc kubenswrapper[4637]: I1201 14:46:00.881419 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:00 crc kubenswrapper[4637]: I1201 14:46:00.881432 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:00 crc kubenswrapper[4637]: I1201 14:46:00.886618 4637 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 14:46:01 crc kubenswrapper[4637]: I1201 14:46:01.855374 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:46:01 crc kubenswrapper[4637]: I1201 14:46:01.856215 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:01 crc kubenswrapper[4637]: I1201 14:46:01.856250 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:01 crc kubenswrapper[4637]: I1201 14:46:01.856261 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:02 crc kubenswrapper[4637]: I1201 14:46:02.697342 4637 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 01 14:46:02 crc kubenswrapper[4637]: E1201 14:46:02.727908 4637 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Dec 01 14:46:02 crc kubenswrapper[4637]: W1201 14:46:02.946399 4637 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 01 14:46:02 crc kubenswrapper[4637]: I1201 14:46:02.946482 4637 trace.go:236] Trace[1888713424]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Dec-2025 14:45:52.945) (total time: 10001ms): Dec 01 
14:46:02 crc kubenswrapper[4637]: Trace[1888713424]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (14:46:02.946) Dec 01 14:46:02 crc kubenswrapper[4637]: Trace[1888713424]: [10.001066496s] [10.001066496s] END Dec 01 14:46:02 crc kubenswrapper[4637]: E1201 14:46:02.946500 4637 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 01 14:46:03 crc kubenswrapper[4637]: E1201 14:46:03.032522 4637 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 01 14:46:03 crc kubenswrapper[4637]: W1201 14:46:03.032620 4637 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 01 14:46:03 crc kubenswrapper[4637]: I1201 14:46:03.032699 4637 trace.go:236] Trace[1717875060]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Dec-2025 14:45:53.031) (total time: 10001ms): Dec 01 14:46:03 crc kubenswrapper[4637]: Trace[1717875060]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (14:46:03.032) Dec 01 14:46:03 crc kubenswrapper[4637]: Trace[1717875060]: [10.001024843s] [10.001024843s] END Dec 01 14:46:03 crc kubenswrapper[4637]: E1201 14:46:03.032720 4637 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: 
failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 01 14:46:03 crc kubenswrapper[4637]: I1201 14:46:03.171685 4637 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 14:46:03 crc kubenswrapper[4637]: I1201 14:46:03.171755 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 14:46:03 crc kubenswrapper[4637]: W1201 14:46:03.321521 4637 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 01 14:46:03 crc kubenswrapper[4637]: I1201 14:46:03.321607 4637 trace.go:236] Trace[612193430]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Dec-2025 14:45:53.320) (total time: 10001ms): Dec 01 14:46:03 crc kubenswrapper[4637]: Trace[612193430]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:46:03.321) Dec 01 14:46:03 crc kubenswrapper[4637]: Trace[612193430]: [10.001087346s] [10.001087346s] END Dec 01 14:46:03 crc kubenswrapper[4637]: E1201 14:46:03.321629 4637 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 01 14:46:03 crc kubenswrapper[4637]: I1201 14:46:03.656679 4637 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 01 14:46:03 crc kubenswrapper[4637]: I1201 14:46:03.656759 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 01 14:46:03 crc kubenswrapper[4637]: I1201 14:46:03.880039 4637 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 14:46:03 crc kubenswrapper[4637]: I1201 14:46:03.880128 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 14:46:06 crc kubenswrapper[4637]: I1201 14:46:06.156234 4637 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 01 14:46:06 crc kubenswrapper[4637]: I1201 14:46:06.156435 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:46:06 crc kubenswrapper[4637]: I1201 14:46:06.160960 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:06 crc kubenswrapper[4637]: I1201 14:46:06.161406 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:06 crc kubenswrapper[4637]: I1201 14:46:06.161424 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:06 crc kubenswrapper[4637]: I1201 14:46:06.178775 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 01 14:46:06 crc kubenswrapper[4637]: I1201 14:46:06.233469 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:46:06 crc kubenswrapper[4637]: I1201 14:46:06.234979 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:06 crc kubenswrapper[4637]: I1201 14:46:06.235047 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:06 crc kubenswrapper[4637]: I1201 14:46:06.235062 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:06 crc kubenswrapper[4637]: I1201 14:46:06.235086 4637 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 01 14:46:06 crc kubenswrapper[4637]: E1201 14:46:06.240140 4637 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not 
synchronized" node="crc" Dec 01 14:46:06 crc kubenswrapper[4637]: I1201 14:46:06.865466 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:46:06 crc kubenswrapper[4637]: I1201 14:46:06.866251 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:06 crc kubenswrapper[4637]: I1201 14:46:06.866279 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:06 crc kubenswrapper[4637]: I1201 14:46:06.866287 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:07 crc kubenswrapper[4637]: I1201 14:46:07.493381 4637 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.176740 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.176886 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.177921 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.177971 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.177980 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.183946 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.371895 4637 
reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.639676 4637 trace.go:236] Trace[1839972261]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Dec-2025 14:45:53.808) (total time: 14831ms): Dec 01 14:46:08 crc kubenswrapper[4637]: Trace[1839972261]: ---"Objects listed" error: 14831ms (14:46:08.639) Dec 01 14:46:08 crc kubenswrapper[4637]: Trace[1839972261]: [14.831223208s] [14.831223208s] END Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.639704 4637 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.641022 4637 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.661424 4637 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:57798->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.661419 4637 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35750->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.661474 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:57798->192.168.126.11:17697: read: connection 
reset by peer" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.661497 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35750->192.168.126.11:17697: read: connection reset by peer" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.661847 4637 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.661952 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.699309 4637 apiserver.go:52] "Watching apiserver" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.701174 4637 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.701635 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 
14:46:08.702046 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.702092 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.702208 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 14:46:08 crc kubenswrapper[4637]: E1201 14:46:08.702219 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.702263 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.702606 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:08 crc kubenswrapper[4637]: E1201 14:46:08.702656 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.702667 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 14:46:08 crc kubenswrapper[4637]: E1201 14:46:08.702716 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.703646 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.704434 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.705498 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.705893 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.707081 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.707288 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.707497 4637 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.707677 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.708058 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.719037 4637 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.737388 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.741830 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.741889 4637 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.741997 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742025 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742051 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742073 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742097 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: 
\"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742121 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742145 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742168 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742197 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742222 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742244 4637 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742266 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742291 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742314 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742338 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742361 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742383 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742408 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742431 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742453 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742476 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742500 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742523 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742544 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742567 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742589 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742612 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:46:08 crc 
kubenswrapper[4637]: I1201 14:46:08.742633 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742653 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742676 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742740 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742788 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742812 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: 
\"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742845 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742875 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742908 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742902 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742899 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742953 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.742992 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743010 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743080 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743102 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743120 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743149 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743168 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743184 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743199 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743217 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743237 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743252 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743267 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743284 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743300 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743316 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743334 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743349 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743364 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743379 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743394 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743410 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743425 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 01 14:46:08 crc 
kubenswrapper[4637]: I1201 14:46:08.743455 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743470 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743531 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743548 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743565 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743580 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod 
\"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743599 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743618 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743633 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743650 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743666 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743681 4637 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743699 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743715 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743731 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743747 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743770 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743786 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743802 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743819 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743834 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743854 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 
14:46:08.743877 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743898 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744005 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744023 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744039 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744056 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744075 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744094 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744111 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744128 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744143 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 01 14:46:08 crc 
kubenswrapper[4637]: I1201 14:46:08.744160 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744180 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744218 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744235 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744250 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744271 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744286 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744302 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744319 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744334 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744350 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744366 4637 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744382 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744399 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744414 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744429 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744445 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744461 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744477 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744493 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744508 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744525 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744541 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744558 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744577 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744593 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744609 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744626 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 
01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744643 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744659 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744676 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744693 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744709 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744725 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744741 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744757 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744773 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744789 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744804 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744819 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744834 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744850 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744866 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744882 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744898 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744914 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744945 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744967 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744987 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745011 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 01 14:46:08 
crc kubenswrapper[4637]: I1201 14:46:08.745031 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745052 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745070 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745087 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745105 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745134 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745150 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745167 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745186 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745205 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745223 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: 
\"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745238 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745253 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745270 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745286 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745301 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745318 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745335 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745350 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745368 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745383 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745401 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 01 14:46:08 crc 
kubenswrapper[4637]: I1201 14:46:08.745417 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745433 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745450 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745467 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745486 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745504 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745521 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745538 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745555 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745572 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745590 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 14:46:08 crc 
kubenswrapper[4637]: I1201 14:46:08.745607 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745622 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745641 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745660 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745698 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745716 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: 
\"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745732 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745748 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745765 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745783 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745802 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 14:46:08 crc 
kubenswrapper[4637]: I1201 14:46:08.745820 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745837 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745854 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745871 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745890 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745906 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod 
\"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745923 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.746158 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.746197 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.746219 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.746246 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.746265 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.746285 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.746308 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.746327 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.746347 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.746366 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.746385 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.746403 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.746434 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:08 crc 
kubenswrapper[4637]: I1201 14:46:08.746453 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.746485 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.746547 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.746560 4637 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.746571 4637 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.746582 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743214 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.750986 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743217 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743323 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743381 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). 
InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743431 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743456 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743516 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743538 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743611 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743702 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743756 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743849 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.743942 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744130 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744332 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744495 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744504 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744528 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744605 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744735 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.744945 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745249 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745304 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.745483 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.746045 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.746327 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.746513 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.746747 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.746892 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.746991 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.747153 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.747376 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.748602 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.748754 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.748807 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.749102 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.749217 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.749411 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.749488 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.749506 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.749724 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.749761 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.750008 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.750165 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.750178 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.750426 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.750867 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.750878 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.751352 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.751491 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.751628 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.751730 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.751856 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.751891 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.753081 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.753327 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.753654 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.753751 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.753999 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.754379 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.755213 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.755276 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.755487 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.755762 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.756636 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.756732 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.757152 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.757401 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.757556 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.757737 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.757896 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.757956 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.758415 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.758551 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.758737 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.758917 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.758962 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.760013 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.760209 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.760783 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.761104 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.761118 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.761331 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.761571 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.761657 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.761731 4637 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.761851 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.762081 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.762201 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.762299 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.762584 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.762803 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.762237 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.762996 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.763172 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.763242 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.763441 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.763640 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.763733 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.763873 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.764774 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.765093 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.765131 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.765318 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.765371 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.765448 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.765515 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.765695 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.765915 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.763114 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.766346 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.766687 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.767044 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.767244 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: E1201 14:46:08.767850 4637 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 14:46:08 crc kubenswrapper[4637]: E1201 14:46:08.767987 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:09.267887274 +0000 UTC m=+19.785596212 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.768357 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 14:46:08 crc kubenswrapper[4637]: E1201 14:46:08.768403 4637 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 14:46:08 crc kubenswrapper[4637]: E1201 14:46:08.768434 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:09.268424788 +0000 UTC m=+19.786133616 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.768529 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.769439 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.769536 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.770146 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.775175 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.776264 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.776345 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.776416 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.776835 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.780082 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.780473 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.780718 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.780995 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.781071 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.781316 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.781494 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.781911 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.782370 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.782546 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.783139 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.783262 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.783272 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.783513 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.783522 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: E1201 14:46:08.783572 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 14:46:08 crc kubenswrapper[4637]: E1201 14:46:08.783799 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 14:46:08 crc kubenswrapper[4637]: E1201 14:46:08.783832 4637 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:46:08 crc kubenswrapper[4637]: E1201 14:46:08.783909 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:09.283862339 +0000 UTC m=+19.801571167 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:46:08 crc kubenswrapper[4637]: E1201 14:46:08.784013 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:46:09.284006133 +0000 UTC m=+19.801714961 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.784211 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.784259 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). 
InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.784457 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.784795 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.784895 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.785220 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.785364 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.785394 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.785443 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.785819 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.786403 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.787242 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.787272 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.787443 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.787649 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: E1201 14:46:08.783593 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 14:46:08 crc kubenswrapper[4637]: E1201 14:46:08.787685 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 14:46:08 crc kubenswrapper[4637]: E1201 14:46:08.787695 4637 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:46:08 crc kubenswrapper[4637]: E1201 14:46:08.787727 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:09.287717004 +0000 UTC m=+19.805425822 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.787800 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.787953 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.788156 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.788156 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.788417 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.788693 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.788797 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.789179 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.789191 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.789736 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.789823 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.789922 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.790113 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.790138 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.790757 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.790779 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.790875 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.791048 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.791269 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.791311 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.791349 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.791355 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.791594 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.791961 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.792062 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.792371 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.792625 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.796035 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.797409 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.798524 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.799048 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.799801 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.800457 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.800888 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.801005 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.802024 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-5vnrh"] Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.802606 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-5vnrh" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.806133 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.806223 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.806262 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.808292 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.808518 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.808516 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.809336 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.812576 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.814148 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.815385 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.815886 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.815978 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.826646 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.833405 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.839589 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847341 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s62lk\" (UniqueName: \"kubernetes.io/projected/dbbdafb4-bc82-462e-be58-844b876172c2-kube-api-access-s62lk\") pod \"node-resolver-5vnrh\" (UID: \"dbbdafb4-bc82-462e-be58-844b876172c2\") " pod="openshift-dns/node-resolver-5vnrh" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847380 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847419 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847477 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/dbbdafb4-bc82-462e-be58-844b876172c2-hosts-file\") pod \"node-resolver-5vnrh\" (UID: \"dbbdafb4-bc82-462e-be58-844b876172c2\") " pod="openshift-dns/node-resolver-5vnrh" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847533 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847542 4637 
reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847551 4637 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847560 4637 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847567 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847576 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847584 4637 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847592 4637 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847599 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: 
\"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847608 4637 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847617 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847625 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847633 4637 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847641 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847650 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847658 4637 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" 
DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847666 4637 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847674 4637 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847682 4637 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847691 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847699 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847707 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847715 4637 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 
14:46:08.847723 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847732 4637 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847740 4637 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847747 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847756 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847764 4637 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847772 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847779 4637 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847787 4637 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847795 4637 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847803 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847811 4637 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847819 4637 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847827 4637 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847834 4637 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on 
node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847843 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847851 4637 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847858 4637 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847866 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847874 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847882 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847890 4637 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847897 4637 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847906 4637 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847913 4637 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847921 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847946 4637 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847953 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847961 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847969 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node 
\"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847977 4637 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847984 4637 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.847999 4637 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848007 4637 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848015 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848022 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848030 4637 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848038 4637 
reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848045 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848055 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848062 4637 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848070 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848078 4637 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848086 4637 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848094 4637 reconciler_common.go:293] "Volume detached for 
volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848102 4637 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848112 4637 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848120 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848128 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848136 4637 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848144 4637 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848151 4637 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848159 4637 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848167 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848174 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848182 4637 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848190 4637 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848197 4637 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848204 4637 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" 
DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848212 4637 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848220 4637 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848228 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848235 4637 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848242 4637 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848250 4637 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848258 4637 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 
14:46:08.848266 4637 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848274 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848282 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848290 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848298 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848306 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848313 4637 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848321 4637 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848329 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848338 4637 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848345 4637 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848353 4637 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848360 4637 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848368 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848376 4637 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848385 4637 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848399 4637 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848406 4637 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848413 4637 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848421 4637 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848428 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848436 4637 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 
crc kubenswrapper[4637]: I1201 14:46:08.848443 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848453 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848462 4637 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848470 4637 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848477 4637 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848486 4637 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848493 4637 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848501 4637 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848508 4637 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848516 4637 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848524 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848533 4637 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848541 4637 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848548 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848558 4637 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848568 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848578 4637 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848588 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848598 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848608 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848615 4637 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848623 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath 
\"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848631 4637 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848638 4637 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848646 4637 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848654 4637 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848661 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848669 4637 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848677 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848685 4637 reconciler_common.go:293] "Volume detached for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848692 4637 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848700 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848708 4637 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848723 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848731 4637 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848739 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848747 4637 reconciler_common.go:293] "Volume detached 
for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848755 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848763 4637 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848770 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848778 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848785 4637 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848793 4637 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848801 4637 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848808 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848816 4637 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848824 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848831 4637 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848839 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848847 4637 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848854 4637 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 01 
14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848861 4637 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848869 4637 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848877 4637 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848884 4637 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848892 4637 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848900 4637 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848908 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848916 4637 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848925 4637 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848947 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848954 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848961 4637 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848969 4637 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848976 4637 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848985 4637 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.848993 4637 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.849000 4637 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.849009 4637 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.849017 4637 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.849025 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.849033 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.849041 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: 
\"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.849049 4637 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.849057 4637 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.849065 4637 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.849073 4637 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.849081 4637 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.849329 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.849604 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" 
(UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.855740 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.858862 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.870538 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.871283 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.885263 4637 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c" exitCode=255 Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.885317 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c"} Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.889781 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.906606 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.925591 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.930160 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.930565 4637 scope.go:117] "RemoveContainer" containerID="b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.939227 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.950116 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/dbbdafb4-bc82-462e-be58-844b876172c2-hosts-file\") pod \"node-resolver-5vnrh\" (UID: \"dbbdafb4-bc82-462e-be58-844b876172c2\") " pod="openshift-dns/node-resolver-5vnrh" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.950152 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s62lk\" (UniqueName: \"kubernetes.io/projected/dbbdafb4-bc82-462e-be58-844b876172c2-kube-api-access-s62lk\") pod \"node-resolver-5vnrh\" (UID: 
\"dbbdafb4-bc82-462e-be58-844b876172c2\") " pod="openshift-dns/node-resolver-5vnrh" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.950175 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.950419 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/dbbdafb4-bc82-462e-be58-844b876172c2-hosts-file\") pod \"node-resolver-5vnrh\" (UID: \"dbbdafb4-bc82-462e-be58-844b876172c2\") " pod="openshift-dns/node-resolver-5vnrh" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.969617 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.983128 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s62lk\" (UniqueName: \"kubernetes.io/projected/dbbdafb4-bc82-462e-be58-844b876172c2-kube-api-access-s62lk\") pod \"node-resolver-5vnrh\" (UID: \"dbbdafb4-bc82-462e-be58-844b876172c2\") " pod="openshift-dns/node-resolver-5vnrh" Dec 01 14:46:08 crc kubenswrapper[4637]: I1201 14:46:08.996241 4637 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.021150 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.027054 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.034272 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 14:46:09 crc kubenswrapper[4637]: W1201 14:46:09.046432 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-6a9eb49b68d94b26b14830b6f2b6118d213c75af23a2944f36345f459bddcd53 WatchSource:0}: Error finding container 6a9eb49b68d94b26b14830b6f2b6118d213c75af23a2944f36345f459bddcd53: Status 404 returned error can't find the container with id 6a9eb49b68d94b26b14830b6f2b6118d213c75af23a2944f36345f459bddcd53 Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.047877 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 14:46:09 crc kubenswrapper[4637]: W1201 14:46:09.048677 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-512e7568c02b84b02875513971e645a15564985237fe0e428dc5c52c64fbada1 WatchSource:0}: Error finding container 512e7568c02b84b02875513971e645a15564985237fe0e428dc5c52c64fbada1: Status 404 returned 
error can't find the container with id 512e7568c02b84b02875513971e645a15564985237fe0e428dc5c52c64fbada1 Dec 01 14:46:09 crc kubenswrapper[4637]: W1201 14:46:09.051002 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-b5e29e4fadeef0dacd4d7ef2cacd8a52de54db38051651fbbaaeb4b8d7911e19 WatchSource:0}: Error finding container b5e29e4fadeef0dacd4d7ef2cacd8a52de54db38051651fbbaaeb4b8d7911e19: Status 404 returned error can't find the container with id b5e29e4fadeef0dacd4d7ef2cacd8a52de54db38051651fbbaaeb4b8d7911e19 Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.068622 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.123447 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-5vnrh" Dec 01 14:46:09 crc kubenswrapper[4637]: W1201 14:46:09.133278 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbbdafb4_bc82_462e_be58_844b876172c2.slice/crio-803bd026bc84576f677b8ab884fad82e7582383cecba95733d696cc414f46754 WatchSource:0}: Error finding container 803bd026bc84576f677b8ab884fad82e7582383cecba95733d696cc414f46754: Status 404 returned error can't find the container with id 803bd026bc84576f677b8ab884fad82e7582383cecba95733d696cc414f46754 Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.139350 4637 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.353844 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.353910 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.353950 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.354003 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.354022 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:09 crc kubenswrapper[4637]: E1201 14:46:09.354046 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:46:10.354023441 +0000 UTC m=+20.871732269 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:46:09 crc kubenswrapper[4637]: E1201 14:46:09.354127 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 14:46:09 crc kubenswrapper[4637]: E1201 14:46:09.354143 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 14:46:09 crc kubenswrapper[4637]: E1201 14:46:09.354148 4637 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 14:46:09 crc kubenswrapper[4637]: E1201 14:46:09.354153 4637 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:46:09 crc kubenswrapper[4637]: E1201 14:46:09.354182 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:10.354175085 +0000 UTC m=+20.871883903 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 14:46:09 crc kubenswrapper[4637]: E1201 14:46:09.354195 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:10.354189395 +0000 UTC m=+20.871898223 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:46:09 crc kubenswrapper[4637]: E1201 14:46:09.354216 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 14:46:09 crc kubenswrapper[4637]: E1201 14:46:09.354223 4637 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 14:46:09 crc kubenswrapper[4637]: E1201 14:46:09.354226 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 14:46:09 crc kubenswrapper[4637]: E1201 14:46:09.354234 4637 projected.go:194] Error preparing data for projected volume 
kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:46:09 crc kubenswrapper[4637]: E1201 14:46:09.354245 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:10.354239146 +0000 UTC m=+20.871947974 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 14:46:09 crc kubenswrapper[4637]: E1201 14:46:09.354289 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:10.354272877 +0000 UTC m=+20.871981795 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.774707 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.775456 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.776473 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.777046 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.777948 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.778390 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.778895 4637 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.779738 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.780339 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.781245 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.781727 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.782692 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.783138 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.783604 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.785671 4637 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.786167 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.787077 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.787419 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.787974 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.788885 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.789336 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.790307 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.790832 4637 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.792709 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.793385 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.794458 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.795743 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.796398 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.797584 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.798136 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.799080 4637 
kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.799195 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.800884 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.801737 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.802158 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.803847 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.804503 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.805394 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 01 14:46:09 
crc kubenswrapper[4637]: I1201 14:46:09.806062 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.807049 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.807539 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.808695 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.809416 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.810439 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.811000 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.811032 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:09Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.811848 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.812385 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.813465 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.813992 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.814744 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.815192 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.816058 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.816579 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.817134 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.831210 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:09Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.846243 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:09Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.857660 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:09Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.870372 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:09Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.881608 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:09Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.889564 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.891235 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772"} Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.891534 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.892316 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b5e29e4fadeef0dacd4d7ef2cacd8a52de54db38051651fbbaaeb4b8d7911e19"} Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.893436 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591"} Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.893460 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"512e7568c02b84b02875513971e645a15564985237fe0e428dc5c52c64fbada1"} Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.894217 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:09Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.894482 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5vnrh" event={"ID":"dbbdafb4-bc82-462e-be58-844b876172c2","Type":"ContainerStarted","Data":"0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf"} Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.894545 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5vnrh" event={"ID":"dbbdafb4-bc82-462e-be58-844b876172c2","Type":"ContainerStarted","Data":"803bd026bc84576f677b8ab884fad82e7582383cecba95733d696cc414f46754"} Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.895852 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0"} Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.895880 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6"} Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.895891 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6a9eb49b68d94b26b14830b6f2b6118d213c75af23a2944f36345f459bddcd53"} Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.911509 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:09Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.940624 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:09Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:09 crc kubenswrapper[4637]: I1201 14:46:09.983942 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:09Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.046599 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.088002 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.134765 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.197472 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.235344 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.249495 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.376133 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.376218 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.376240 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.376280 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:10 crc kubenswrapper[4637]: E1201 14:46:10.376309 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:46:12.376283206 +0000 UTC m=+22.893992034 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.376360 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:10 crc kubenswrapper[4637]: E1201 14:46:10.376392 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 14:46:10 crc kubenswrapper[4637]: E1201 14:46:10.376408 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 
01 14:46:10 crc kubenswrapper[4637]: E1201 14:46:10.376434 4637 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:46:10 crc kubenswrapper[4637]: E1201 14:46:10.376473 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:12.376460412 +0000 UTC m=+22.894169240 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:46:10 crc kubenswrapper[4637]: E1201 14:46:10.376479 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 14:46:10 crc kubenswrapper[4637]: E1201 14:46:10.376492 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 14:46:10 crc kubenswrapper[4637]: E1201 14:46:10.376503 4637 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:46:10 crc kubenswrapper[4637]: E1201 14:46:10.376535 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:12.376528353 +0000 UTC m=+22.894237181 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:46:10 crc kubenswrapper[4637]: E1201 14:46:10.376577 4637 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 14:46:10 crc kubenswrapper[4637]: E1201 14:46:10.376597 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:12.376591675 +0000 UTC m=+22.894300503 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 14:46:10 crc kubenswrapper[4637]: E1201 14:46:10.376623 4637 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 14:46:10 crc kubenswrapper[4637]: E1201 14:46:10.376640 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:12.376635126 +0000 UTC m=+22.894343954 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.483265 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-p7rjd"] Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.483858 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.485294 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-n2brl"] Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.485641 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.489164 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.489342 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.489533 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-d5895"] Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.489729 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.490174 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.490381 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-d5895" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.491185 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.492650 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.493013 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.493153 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 01 14:46:10 crc kubenswrapper[4637]: W1201 14:46:10.493477 4637 reflector.go:561] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": failed to list *v1.Secret: secrets "multus-ancillary-tools-dockercfg-vnmsz" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.493515 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 01 14:46:10 crc kubenswrapper[4637]: E1201 14:46:10.493522 4637 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-vnmsz\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"multus-ancillary-tools-dockercfg-vnmsz\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.493970 4637 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.493989 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.517996 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.535496 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.568447 4637 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.578262 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-multus-conf-dir\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.578331 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njg79\" (UniqueName: \"kubernetes.io/projected/4131bcca-3504-4255-879d-7921162a335c-kube-api-access-njg79\") pod \"multus-additional-cni-plugins-d5895\" (UID: \"4131bcca-3504-4255-879d-7921162a335c\") " pod="openshift-multus/multus-additional-cni-plugins-d5895" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.578407 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjm8r\" (UniqueName: 
\"kubernetes.io/projected/2db6c86b-ff8c-4746-9c91-7dac0498c0b9-kube-api-access-sjm8r\") pod \"machine-config-daemon-p7rjd\" (UID: \"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\") " pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.578499 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-host-run-multus-certs\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.578546 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-host-run-netns\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.578578 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-etc-kubernetes\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.578604 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4131bcca-3504-4255-879d-7921162a335c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-d5895\" (UID: \"4131bcca-3504-4255-879d-7921162a335c\") " pod="openshift-multus/multus-additional-cni-plugins-d5895" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.578637 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2db6c86b-ff8c-4746-9c91-7dac0498c0b9-proxy-tls\") pod \"machine-config-daemon-p7rjd\" (UID: \"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\") " pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.578663 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-cnibin\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.578720 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2db6c86b-ff8c-4746-9c91-7dac0498c0b9-rootfs\") pod \"machine-config-daemon-p7rjd\" (UID: \"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\") " pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.578739 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-system-cni-dir\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.578755 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-multus-socket-dir-parent\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.578816 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" 
(UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-os-release\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.578835 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-hostroot\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.578851 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxfmt\" (UniqueName: \"kubernetes.io/projected/f64d8237-8116-4742-8d7f-9f6e8018e4c2-kube-api-access-xxfmt\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.578885 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4131bcca-3504-4255-879d-7921162a335c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-d5895\" (UID: \"4131bcca-3504-4255-879d-7921162a335c\") " pod="openshift-multus/multus-additional-cni-plugins-d5895" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.578905 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-multus-cni-dir\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.579021 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/2db6c86b-ff8c-4746-9c91-7dac0498c0b9-mcd-auth-proxy-config\") pod \"machine-config-daemon-p7rjd\" (UID: \"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\") " pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.579054 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4131bcca-3504-4255-879d-7921162a335c-cnibin\") pod \"multus-additional-cni-plugins-d5895\" (UID: \"4131bcca-3504-4255-879d-7921162a335c\") " pod="openshift-multus/multus-additional-cni-plugins-d5895" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.579076 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4131bcca-3504-4255-879d-7921162a335c-cni-binary-copy\") pod \"multus-additional-cni-plugins-d5895\" (UID: \"4131bcca-3504-4255-879d-7921162a335c\") " pod="openshift-multus/multus-additional-cni-plugins-d5895" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.579094 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-host-run-k8s-cni-cncf-io\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.579142 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f64d8237-8116-4742-8d7f-9f6e8018e4c2-multus-daemon-config\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.579162 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f64d8237-8116-4742-8d7f-9f6e8018e4c2-cni-binary-copy\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.579189 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4131bcca-3504-4255-879d-7921162a335c-system-cni-dir\") pod \"multus-additional-cni-plugins-d5895\" (UID: \"4131bcca-3504-4255-879d-7921162a335c\") " pod="openshift-multus/multus-additional-cni-plugins-d5895" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.579208 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-host-var-lib-kubelet\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.579399 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-host-var-lib-cni-multus\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.579507 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4131bcca-3504-4255-879d-7921162a335c-os-release\") pod \"multus-additional-cni-plugins-d5895\" (UID: \"4131bcca-3504-4255-879d-7921162a335c\") " pod="openshift-multus/multus-additional-cni-plugins-d5895" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.579581 
4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-host-var-lib-cni-bin\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.585757 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.601197 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.624160 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.636845 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.652690 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.663514 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.675283 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.680411 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4131bcca-3504-4255-879d-7921162a335c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-d5895\" (UID: \"4131bcca-3504-4255-879d-7921162a335c\") " 
pod="openshift-multus/multus-additional-cni-plugins-d5895" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.680449 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2db6c86b-ff8c-4746-9c91-7dac0498c0b9-proxy-tls\") pod \"machine-config-daemon-p7rjd\" (UID: \"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\") " pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.680500 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-cnibin\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.680516 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-host-run-netns\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.680536 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-etc-kubernetes\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.680558 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2db6c86b-ff8c-4746-9c91-7dac0498c0b9-rootfs\") pod \"machine-config-daemon-p7rjd\" (UID: \"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\") " pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.680572 4637 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-system-cni-dir\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.680587 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-multus-socket-dir-parent\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.680603 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-os-release\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.680617 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-hostroot\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.680631 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4131bcca-3504-4255-879d-7921162a335c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-d5895\" (UID: \"4131bcca-3504-4255-879d-7921162a335c\") " pod="openshift-multus/multus-additional-cni-plugins-d5895" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.680644 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-multus-cni-dir\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.680663 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxfmt\" (UniqueName: \"kubernetes.io/projected/f64d8237-8116-4742-8d7f-9f6e8018e4c2-kube-api-access-xxfmt\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.680685 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2db6c86b-ff8c-4746-9c91-7dac0498c0b9-mcd-auth-proxy-config\") pod \"machine-config-daemon-p7rjd\" (UID: \"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\") " pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.680707 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4131bcca-3504-4255-879d-7921162a335c-cnibin\") pod \"multus-additional-cni-plugins-d5895\" (UID: \"4131bcca-3504-4255-879d-7921162a335c\") " pod="openshift-multus/multus-additional-cni-plugins-d5895" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.680725 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4131bcca-3504-4255-879d-7921162a335c-cni-binary-copy\") pod \"multus-additional-cni-plugins-d5895\" (UID: \"4131bcca-3504-4255-879d-7921162a335c\") " pod="openshift-multus/multus-additional-cni-plugins-d5895" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.680741 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-host-run-k8s-cni-cncf-io\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.680769 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f64d8237-8116-4742-8d7f-9f6e8018e4c2-multus-daemon-config\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.680783 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f64d8237-8116-4742-8d7f-9f6e8018e4c2-cni-binary-copy\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.680802 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4131bcca-3504-4255-879d-7921162a335c-system-cni-dir\") pod \"multus-additional-cni-plugins-d5895\" (UID: \"4131bcca-3504-4255-879d-7921162a335c\") " pod="openshift-multus/multus-additional-cni-plugins-d5895" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.680862 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-host-var-lib-kubelet\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.680898 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4131bcca-3504-4255-879d-7921162a335c-os-release\") pod 
\"multus-additional-cni-plugins-d5895\" (UID: \"4131bcca-3504-4255-879d-7921162a335c\") " pod="openshift-multus/multus-additional-cni-plugins-d5895" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.680913 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-host-var-lib-cni-bin\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.680948 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-host-var-lib-cni-multus\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.680964 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-multus-conf-dir\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.680982 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njg79\" (UniqueName: \"kubernetes.io/projected/4131bcca-3504-4255-879d-7921162a335c-kube-api-access-njg79\") pod \"multus-additional-cni-plugins-d5895\" (UID: \"4131bcca-3504-4255-879d-7921162a335c\") " pod="openshift-multus/multus-additional-cni-plugins-d5895" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.680997 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjm8r\" (UniqueName: \"kubernetes.io/projected/2db6c86b-ff8c-4746-9c91-7dac0498c0b9-kube-api-access-sjm8r\") pod 
\"machine-config-daemon-p7rjd\" (UID: \"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\") " pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.681013 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-host-run-multus-certs\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.680995 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4131bcca-3504-4255-879d-7921162a335c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-d5895\" (UID: \"4131bcca-3504-4255-879d-7921162a335c\") " pod="openshift-multus/multus-additional-cni-plugins-d5895" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.681076 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-host-run-multus-certs\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.681178 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4131bcca-3504-4255-879d-7921162a335c-cnibin\") pod \"multus-additional-cni-plugins-d5895\" (UID: \"4131bcca-3504-4255-879d-7921162a335c\") " pod="openshift-multus/multus-additional-cni-plugins-d5895" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.681270 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-cnibin\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " 
pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.681322 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-host-run-netns\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.681375 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-etc-kubernetes\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.681432 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2db6c86b-ff8c-4746-9c91-7dac0498c0b9-rootfs\") pod \"machine-config-daemon-p7rjd\" (UID: \"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\") " pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.681501 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-system-cni-dir\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.681636 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-multus-socket-dir-parent\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.681777 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-host-var-lib-cni-multus\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.681905 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-host-run-k8s-cni-cncf-io\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.682115 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4131bcca-3504-4255-879d-7921162a335c-os-release\") pod \"multus-additional-cni-plugins-d5895\" (UID: \"4131bcca-3504-4255-879d-7921162a335c\") " pod="openshift-multus/multus-additional-cni-plugins-d5895" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.682136 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-os-release\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.682152 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-host-var-lib-cni-bin\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.682154 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-multus-cni-dir\") pod 
\"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.682190 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-hostroot\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.682500 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f64d8237-8116-4742-8d7f-9f6e8018e4c2-multus-daemon-config\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.682663 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4131bcca-3504-4255-879d-7921162a335c-cni-binary-copy\") pod \"multus-additional-cni-plugins-d5895\" (UID: \"4131bcca-3504-4255-879d-7921162a335c\") " pod="openshift-multus/multus-additional-cni-plugins-d5895" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.682739 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-multus-conf-dir\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.682761 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4131bcca-3504-4255-879d-7921162a335c-system-cni-dir\") pod \"multus-additional-cni-plugins-d5895\" (UID: \"4131bcca-3504-4255-879d-7921162a335c\") " pod="openshift-multus/multus-additional-cni-plugins-d5895" Dec 01 14:46:10 crc 
kubenswrapper[4637]: I1201 14:46:10.682773 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4131bcca-3504-4255-879d-7921162a335c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-d5895\" (UID: \"4131bcca-3504-4255-879d-7921162a335c\") " pod="openshift-multus/multus-additional-cni-plugins-d5895" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.682795 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f64d8237-8116-4742-8d7f-9f6e8018e4c2-host-var-lib-kubelet\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.682838 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2db6c86b-ff8c-4746-9c91-7dac0498c0b9-mcd-auth-proxy-config\") pod \"machine-config-daemon-p7rjd\" (UID: \"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\") " pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.682912 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f64d8237-8116-4742-8d7f-9f6e8018e4c2-cni-binary-copy\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.690663 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2db6c86b-ff8c-4746-9c91-7dac0498c0b9-proxy-tls\") pod \"machine-config-daemon-p7rjd\" (UID: \"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\") " pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.692905 4637 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"
hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.696538 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxfmt\" (UniqueName: \"kubernetes.io/projected/f64d8237-8116-4742-8d7f-9f6e8018e4c2-kube-api-access-xxfmt\") pod \"multus-n2brl\" (UID: \"f64d8237-8116-4742-8d7f-9f6e8018e4c2\") " pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.702165 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njg79\" (UniqueName: \"kubernetes.io/projected/4131bcca-3504-4255-879d-7921162a335c-kube-api-access-njg79\") pod \"multus-additional-cni-plugins-d5895\" (UID: \"4131bcca-3504-4255-879d-7921162a335c\") " pod="openshift-multus/multus-additional-cni-plugins-d5895" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.702347 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjm8r\" (UniqueName: \"kubernetes.io/projected/2db6c86b-ff8c-4746-9c91-7dac0498c0b9-kube-api-access-sjm8r\") pod \"machine-config-daemon-p7rjd\" (UID: \"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\") " pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.711822 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.733499 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.748373 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.764603 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.770334 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:10 crc kubenswrapper[4637]: E1201 14:46:10.770460 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.770546 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:10 crc kubenswrapper[4637]: E1201 14:46:10.770609 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.770651 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:10 crc kubenswrapper[4637]: E1201 14:46:10.770695 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.782731 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.792653 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.800084 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.804294 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.808378 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-n2brl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.833059 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.851551 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.892175 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.909039 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.909543 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rhl62"] Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.910386 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.910484 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-n2brl" event={"ID":"f64d8237-8116-4742-8d7f-9f6e8018e4c2","Type":"ContainerStarted","Data":"b8af61757572f11deab2f7b45413db2aa0c1dcd017222cd993f1a29b6d8584ae"} Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.912200 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerStarted","Data":"cb4bc7e99fbbd47e312685c512e6292eabf38cb295a4351f6feed6ba5cb389e5"} Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.913024 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.913044 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.913170 4637 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.913189 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.922777 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.922838 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.923120 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888
cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.923379 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.923523 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.937768 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.974389 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.983730 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-run-ovn\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.983808 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-log-socket\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.983852 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-cni-bin\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.983875 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-slash\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.983908 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.983957 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-cni-netd\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.983984 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-ovnkube-config\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:10 crc kubenswrapper[4637]: 
I1201 14:46:10.984026 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88g4p\" (UniqueName: \"kubernetes.io/projected/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-kube-api-access-88g4p\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.984056 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-node-log\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.984072 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-ovn-node-metrics-cert\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.984109 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-ovnkube-script-lib\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.984137 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-run-openvswitch\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" 
Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.984196 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-run-ovn-kubernetes\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.984240 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-run-netns\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.984271 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-etc-openvswitch\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.984326 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-kubelet\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.984616 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-run-systemd\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.984681 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-var-lib-openvswitch\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.984714 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-systemd-units\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:10 crc kubenswrapper[4637]: I1201 14:46:10.984736 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-env-overrides\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.008008 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.057157 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:11Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.077192 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:11Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.085786 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-cni-bin\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 
crc kubenswrapper[4637]: I1201 14:46:11.085846 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-slash\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.085869 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.085891 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88g4p\" (UniqueName: \"kubernetes.io/projected/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-kube-api-access-88g4p\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.085912 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-cni-netd\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.085945 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-ovnkube-config\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc 
kubenswrapper[4637]: I1201 14:46:11.085973 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-node-log\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.085988 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-ovn-node-metrics-cert\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.086003 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-ovnkube-script-lib\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.086026 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-run-openvswitch\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.086044 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-run-ovn-kubernetes\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.086061 4637 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-run-netns\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.086084 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-etc-openvswitch\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.086103 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-kubelet\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.086120 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-run-systemd\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.086139 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-var-lib-openvswitch\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.086158 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-systemd-units\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.086202 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-env-overrides\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.086221 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-run-ovn\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.086243 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-log-socket\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.086315 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-log-socket\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.086379 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-cni-bin\") pod 
\"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.086416 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-slash\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.086442 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.086788 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-cni-netd\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.087459 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-etc-openvswitch\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.087546 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-node-log\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.087558 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-kubelet\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.087644 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-run-systemd\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.087685 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-var-lib-openvswitch\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.087712 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-systemd-units\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.088174 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-run-ovn-kubernetes\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 
14:46:11.088233 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-run-netns\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.088250 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-env-overrides\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.088271 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-run-ovn\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.088385 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-run-openvswitch\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.088732 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-ovnkube-config\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.088976 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-ovnkube-script-lib\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.093673 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-ovn-node-metrics-cert\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.101190 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:11Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.114084 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88g4p\" (UniqueName: \"kubernetes.io/projected/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-kube-api-access-88g4p\") pod \"ovnkube-node-rhl62\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.131162 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:11Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.168209 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:11Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.183171 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:11Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.204327 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:11Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.218472 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:11Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.231205 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:11Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.232277 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:11 crc kubenswrapper[4637]: W1201 14:46:11.246554 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4fc1be7_621f_4fdc_bc4d_08b6b9e9e831.slice/crio-a212d35dc564b2ac9901c0e0d33b9ebd4c79caec187c26475edbeee582374122 WatchSource:0}: Error finding container a212d35dc564b2ac9901c0e0d33b9ebd4c79caec187c26475edbeee582374122: Status 404 returned error can't find the container with id a212d35dc564b2ac9901c0e0d33b9ebd4c79caec187c26475edbeee582374122 Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.246668 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:11Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.273640 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:11Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.284555 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:11Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.312663 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:11Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.331408 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:11Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.352123 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:11Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.369130 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:11Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.390849 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:11Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.411886 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:11Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.425205 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:11Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.445176 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:11Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.521300 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.526579 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-d5895" Dec 01 14:46:11 crc kubenswrapper[4637]: W1201 14:46:11.538996 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4131bcca_3504_4255_879d_7921162a335c.slice/crio-dc884d16513fd6bb3e637c3584edc5613de172f5c2316c193a2cc600b236cb25 WatchSource:0}: Error finding container dc884d16513fd6bb3e637c3584edc5613de172f5c2316c193a2cc600b236cb25: Status 404 returned error can't find the container with id dc884d16513fd6bb3e637c3584edc5613de172f5c2316c193a2cc600b236cb25 Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.917120 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerStarted","Data":"75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72"} Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.917155 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerStarted","Data":"8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8"} Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.917887 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" 
event={"ID":"4131bcca-3504-4255-879d-7921162a335c","Type":"ContainerStarted","Data":"dc884d16513fd6bb3e637c3584edc5613de172f5c2316c193a2cc600b236cb25"} Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.919000 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004"} Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.920342 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-n2brl" event={"ID":"f64d8237-8116-4742-8d7f-9f6e8018e4c2","Type":"ContainerStarted","Data":"837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da"} Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.921429 4637 generic.go:334] "Generic (PLEG): container finished" podID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerID="6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405" exitCode=0 Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.921487 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" event={"ID":"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831","Type":"ContainerDied","Data":"6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405"} Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.921513 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" event={"ID":"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831","Type":"ContainerStarted","Data":"a212d35dc564b2ac9901c0e0d33b9ebd4c79caec187c26475edbeee582374122"} Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.930141 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:11Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.944983 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:11Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.961747 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:11Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.972519 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:11Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.981242 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:11Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:11 crc kubenswrapper[4637]: I1201 14:46:11.992770 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:11Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.007057 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.020825 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.034722 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.056255 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.068962 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.079572 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde
9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.106180 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.117062 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 
14:46:12.132494 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.144830 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.157324 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.169964 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.184259 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.204791 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.224727 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.235031 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.251700 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.264362 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.280603 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.297085 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.402282 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.402372 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.402398 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.402418 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.402443 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:12 crc kubenswrapper[4637]: E1201 14:46:12.402545 4637 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 14:46:12 crc kubenswrapper[4637]: E1201 14:46:12.402608 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 14:46:12 crc kubenswrapper[4637]: E1201 14:46:12.402624 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 14:46:12 crc kubenswrapper[4637]: E1201 14:46:12.402635 4637 projected.go:194] Error preparing data for projected volume 
kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:46:12 crc kubenswrapper[4637]: E1201 14:46:12.402710 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 14:46:12 crc kubenswrapper[4637]: E1201 14:46:12.402723 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 14:46:12 crc kubenswrapper[4637]: E1201 14:46:12.402733 4637 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:46:12 crc kubenswrapper[4637]: E1201 14:46:12.402624 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:16.402602942 +0000 UTC m=+26.920311770 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 14:46:12 crc kubenswrapper[4637]: E1201 14:46:12.402775 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:16.402760506 +0000 UTC m=+26.920469414 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:46:12 crc kubenswrapper[4637]: E1201 14:46:12.402786 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:16.402780746 +0000 UTC m=+26.920489574 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:46:12 crc kubenswrapper[4637]: E1201 14:46:12.402553 4637 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 14:46:12 crc kubenswrapper[4637]: E1201 14:46:12.402809 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:16.402804307 +0000 UTC m=+26.920513125 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 14:46:12 crc kubenswrapper[4637]: E1201 14:46:12.402851 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:46:16.402846338 +0000 UTC m=+26.920555156 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.640591 4637 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.649679 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.649738 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.649751 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.649889 4637 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.660722 4637 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.661348 4637 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.662744 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.662794 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.662809 4637 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.662829 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.662842 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:12Z","lastTransitionTime":"2025-12-01T14:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:12 crc kubenswrapper[4637]: E1201 14:46:12.685387 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.690536 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.690567 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.690576 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.690594 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.690607 4637 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:12Z","lastTransitionTime":"2025-12-01T14:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:12 crc kubenswrapper[4637]: E1201 14:46:12.711188 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.714894 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.714920 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.714944 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.714958 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.714967 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:12Z","lastTransitionTime":"2025-12-01T14:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:12 crc kubenswrapper[4637]: E1201 14:46:12.728597 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.732315 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.732352 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.732360 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.732376 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.732387 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:12Z","lastTransitionTime":"2025-12-01T14:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:12 crc kubenswrapper[4637]: E1201 14:46:12.746580 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.750162 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.750203 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.750217 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.750234 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.750246 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:12Z","lastTransitionTime":"2025-12-01T14:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:12 crc kubenswrapper[4637]: E1201 14:46:12.764405 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: E1201 14:46:12.764521 4637 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.766538 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.766580 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.766588 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.766605 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.766615 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:12Z","lastTransitionTime":"2025-12-01T14:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.771016 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.771128 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:12 crc kubenswrapper[4637]: E1201 14:46:12.771231 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.771527 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:12 crc kubenswrapper[4637]: E1201 14:46:12.771585 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:46:12 crc kubenswrapper[4637]: E1201 14:46:12.771651 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.868285 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.868317 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.868324 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.868339 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.868348 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:12Z","lastTransitionTime":"2025-12-01T14:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.925025 4637 generic.go:334] "Generic (PLEG): container finished" podID="4131bcca-3504-4255-879d-7921162a335c" containerID="b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e" exitCode=0 Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.925090 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" event={"ID":"4131bcca-3504-4255-879d-7921162a335c","Type":"ContainerDied","Data":"b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e"} Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.929548 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" event={"ID":"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831","Type":"ContainerStarted","Data":"0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372"} Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.929583 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" event={"ID":"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831","Type":"ContainerStarted","Data":"230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855"} Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.929594 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" event={"ID":"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831","Type":"ContainerStarted","Data":"1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989"} Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.929604 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" event={"ID":"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831","Type":"ContainerStarted","Data":"23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f"} Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.929613 4637 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" event={"ID":"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831","Type":"ContainerStarted","Data":"33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205"} Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.929621 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" event={"ID":"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831","Type":"ContainerStarted","Data":"ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd"} Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.943468 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.963277 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\
\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.975391 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.975428 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.975438 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.975452 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.975461 4637 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:12Z","lastTransitionTime":"2025-12-01T14:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.980116 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:12 crc kubenswrapper[4637]: I1201 14:46:12.992152 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.001268 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:12Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.012135 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:13Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.025335 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:13Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.038979 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:13Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.050434 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:13Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.066592 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:13Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.079722 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.079760 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.079769 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.079783 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.079795 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:13Z","lastTransitionTime":"2025-12-01T14:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.080069 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:13Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.092549 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:13Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.101954 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde
9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:13Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.181982 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.182023 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.182035 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:13 crc 
kubenswrapper[4637]: I1201 14:46:13.182059 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.182070 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:13Z","lastTransitionTime":"2025-12-01T14:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.284541 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.284776 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.284851 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.284940 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.285016 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:13Z","lastTransitionTime":"2025-12-01T14:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.387010 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.387137 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.387148 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.387160 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.387169 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:13Z","lastTransitionTime":"2025-12-01T14:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.488794 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.489068 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.489152 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.489222 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.489291 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:13Z","lastTransitionTime":"2025-12-01T14:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.592047 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.592430 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.592486 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.592547 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.592599 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:13Z","lastTransitionTime":"2025-12-01T14:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.694498 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.694703 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.694777 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.694842 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.694921 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:13Z","lastTransitionTime":"2025-12-01T14:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.796946 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.796981 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.796990 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.797007 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.797020 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:13Z","lastTransitionTime":"2025-12-01T14:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.899187 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.899231 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.899242 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.899258 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.899268 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:13Z","lastTransitionTime":"2025-12-01T14:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.934430 4637 generic.go:334] "Generic (PLEG): container finished" podID="4131bcca-3504-4255-879d-7921162a335c" containerID="ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d" exitCode=0 Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.934469 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" event={"ID":"4131bcca-3504-4255-879d-7921162a335c","Type":"ContainerDied","Data":"ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d"} Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.950878 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-blxft"] Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.951258 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-blxft" Dec 01 14:46:13 crc kubenswrapper[4637]: W1201 14:46:13.953397 4637 reflector.go:561] object-"openshift-image-registry"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Dec 01 14:46:13 crc kubenswrapper[4637]: E1201 14:46:13.953574 4637 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 01 14:46:13 crc kubenswrapper[4637]: W1201 14:46:13.953687 4637 reflector.go:561] 
object-"openshift-image-registry"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Dec 01 14:46:13 crc kubenswrapper[4637]: E1201 14:46:13.953767 4637 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 01 14:46:13 crc kubenswrapper[4637]: W1201 14:46:13.953870 4637 reflector.go:561] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": failed to list *v1.Secret: secrets "node-ca-dockercfg-4777p" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Dec 01 14:46:13 crc kubenswrapper[4637]: E1201 14:46:13.953997 4637 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"node-ca-dockercfg-4777p\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-ca-dockercfg-4777p\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 01 14:46:13 crc kubenswrapper[4637]: W1201 14:46:13.954191 4637 reflector.go:561] object-"openshift-image-registry"/"image-registry-certificates": failed to list *v1.ConfigMap: configmaps "image-registry-certificates" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace 
"openshift-image-registry": no relationship found between node 'crc' and this object Dec 01 14:46:13 crc kubenswrapper[4637]: E1201 14:46:13.954309 4637 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-certificates\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"image-registry-certificates\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.962319 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:13Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.984423 4637 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:13Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:13 crc kubenswrapper[4637]: I1201 14:46:13.996256 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:13Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.001395 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.001666 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.001679 4637 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.001693 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.001703 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:14Z","lastTransitionTime":"2025-12-01T14:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.014418 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.021360 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/92faa232-0163-4022-8f1c-ade68529f250-serviceca\") pod \"node-ca-blxft\" (UID: \"92faa232-0163-4022-8f1c-ade68529f250\") " pod="openshift-image-registry/node-ca-blxft" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.021424 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rccrq\" (UniqueName: \"kubernetes.io/projected/92faa232-0163-4022-8f1c-ade68529f250-kube-api-access-rccrq\") pod \"node-ca-blxft\" (UID: \"92faa232-0163-4022-8f1c-ade68529f250\") " pod="openshift-image-registry/node-ca-blxft" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.021617 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/92faa232-0163-4022-8f1c-ade68529f250-host\") pod \"node-ca-blxft\" (UID: \"92faa232-0163-4022-8f1c-ade68529f250\") " pod="openshift-image-registry/node-ca-blxft" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.025773 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T14:46:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.035558 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.053610 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.065313 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.075812 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.090987 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.103501 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:14 crc 
kubenswrapper[4637]: I1201 14:46:14.103537 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.103549 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.103565 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.103575 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:14Z","lastTransitionTime":"2025-12-01T14:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.105100 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-01T14:46:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.114408 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.122495 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/92faa232-0163-4022-8f1c-ade68529f250-host\") pod \"node-ca-blxft\" (UID: \"92faa232-0163-4022-8f1c-ade68529f250\") " pod="openshift-image-registry/node-ca-blxft" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.122554 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/92faa232-0163-4022-8f1c-ade68529f250-serviceca\") pod \"node-ca-blxft\" (UID: \"92faa232-0163-4022-8f1c-ade68529f250\") " pod="openshift-image-registry/node-ca-blxft" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.122568 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/92faa232-0163-4022-8f1c-ade68529f250-host\") pod \"node-ca-blxft\" (UID: \"92faa232-0163-4022-8f1c-ade68529f250\") " pod="openshift-image-registry/node-ca-blxft" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.122589 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rccrq\" (UniqueName: 
\"kubernetes.io/projected/92faa232-0163-4022-8f1c-ade68529f250-kube-api-access-rccrq\") pod \"node-ca-blxft\" (UID: \"92faa232-0163-4022-8f1c-ade68529f250\") " pod="openshift-image-registry/node-ca-blxft" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.124833 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.135638 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.145906 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.155561 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.166731 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.177553 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.196461 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.205747 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.205783 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.205792 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.206062 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.206079 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:14Z","lastTransitionTime":"2025-12-01T14:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.214642 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.225176 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.235105 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde
9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.248428 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.262838 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.277353 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.289260 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.299805 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.308688 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.309199 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.309274 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.309346 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.309448 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:14Z","lastTransitionTime":"2025-12-01T14:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.412837 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.412906 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.412922 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.412962 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.413004 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:14Z","lastTransitionTime":"2025-12-01T14:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.515771 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.515839 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.515854 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.515881 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.515897 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:14Z","lastTransitionTime":"2025-12-01T14:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.618588 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.618631 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.618647 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.618669 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.618681 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:14Z","lastTransitionTime":"2025-12-01T14:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.720768 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.720991 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.721065 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.721148 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.721220 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:14Z","lastTransitionTime":"2025-12-01T14:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.771264 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.771975 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:14 crc kubenswrapper[4637]: E1201 14:46:14.772183 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.772229 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:14 crc kubenswrapper[4637]: E1201 14:46:14.772518 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:46:14 crc kubenswrapper[4637]: E1201 14:46:14.772585 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.824321 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.824349 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.824360 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.824374 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.824385 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:14Z","lastTransitionTime":"2025-12-01T14:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.925545 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.925698 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.925753 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.925812 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.925865 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:14Z","lastTransitionTime":"2025-12-01T14:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.936539 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.940226 4637 generic.go:334] "Generic (PLEG): container finished" podID="4131bcca-3504-4255-879d-7921162a335c" containerID="8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701" exitCode=0 Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.940270 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" event={"ID":"4131bcca-3504-4255-879d-7921162a335c","Type":"ContainerDied","Data":"8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701"} Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.959697 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.976344 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:14 crc kubenswrapper[4637]: I1201 14:46:14.993614 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.009614 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.021715 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.027612 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.027637 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.027645 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.027659 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.027670 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:15Z","lastTransitionTime":"2025-12-01T14:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.034580 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.044503 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.103089 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:15 crc kubenswrapper[4637]: E1201 14:46:15.122984 4637 configmap.go:193] Couldn't get configMap openshift-image-registry/image-registry-certificates: failed to sync configmap cache: timed out waiting for the condition Dec 01 14:46:15 crc kubenswrapper[4637]: E1201 14:46:15.123080 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/92faa232-0163-4022-8f1c-ade68529f250-serviceca podName:92faa232-0163-4022-8f1c-ade68529f250 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:15.623058478 +0000 UTC m=+26.140767306 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serviceca" (UniqueName: "kubernetes.io/configmap/92faa232-0163-4022-8f1c-ade68529f250-serviceca") pod "node-ca-blxft" (UID: "92faa232-0163-4022-8f1c-ade68529f250") : failed to sync configmap cache: timed out waiting for the condition Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.127627 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.129774 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.129828 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.129840 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.129865 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.129878 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:15Z","lastTransitionTime":"2025-12-01T14:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:15 crc kubenswrapper[4637]: E1201 14:46:15.136522 4637 projected.go:288] Couldn't get configMap openshift-image-registry/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 01 14:46:15 crc kubenswrapper[4637]: E1201 14:46:15.136573 4637 projected.go:194] Error preparing data for projected volume kube-api-access-rccrq for pod openshift-image-registry/node-ca-blxft: failed to sync configmap cache: timed out waiting for the condition Dec 01 14:46:15 crc kubenswrapper[4637]: E1201 14:46:15.136638 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/92faa232-0163-4022-8f1c-ade68529f250-kube-api-access-rccrq podName:92faa232-0163-4022-8f1c-ade68529f250 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:15.636617416 +0000 UTC m=+26.154326244 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rccrq" (UniqueName: "kubernetes.io/projected/92faa232-0163-4022-8f1c-ade68529f250-kube-api-access-rccrq") pod "node-ca-blxft" (UID: "92faa232-0163-4022-8f1c-ade68529f250") : failed to sync configmap cache: timed out waiting for the condition Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.142273 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde
9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.165180 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.175526 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.184041 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.198194 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-12-01T14:46:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.199034 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.212606 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 
14:46:15.231756 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.231789 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.231797 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.231810 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.231820 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:15Z","lastTransitionTime":"2025-12-01T14:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.323460 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.334170 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.334227 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.334244 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.334268 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.334280 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:15Z","lastTransitionTime":"2025-12-01T14:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.447424 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.447465 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.447477 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.447494 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.447505 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:15Z","lastTransitionTime":"2025-12-01T14:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.554999 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.555050 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.555062 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.555083 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.555095 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:15Z","lastTransitionTime":"2025-12-01T14:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.645347 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/92faa232-0163-4022-8f1c-ade68529f250-serviceca\") pod \"node-ca-blxft\" (UID: \"92faa232-0163-4022-8f1c-ade68529f250\") " pod="openshift-image-registry/node-ca-blxft" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.645534 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rccrq\" (UniqueName: \"kubernetes.io/projected/92faa232-0163-4022-8f1c-ade68529f250-kube-api-access-rccrq\") pod \"node-ca-blxft\" (UID: \"92faa232-0163-4022-8f1c-ade68529f250\") " pod="openshift-image-registry/node-ca-blxft" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.647864 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/92faa232-0163-4022-8f1c-ade68529f250-serviceca\") pod \"node-ca-blxft\" (UID: \"92faa232-0163-4022-8f1c-ade68529f250\") " pod="openshift-image-registry/node-ca-blxft" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.659335 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.659771 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.659781 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.659801 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.659813 4637 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:15Z","lastTransitionTime":"2025-12-01T14:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.664055 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rccrq\" (UniqueName: \"kubernetes.io/projected/92faa232-0163-4022-8f1c-ade68529f250-kube-api-access-rccrq\") pod \"node-ca-blxft\" (UID: \"92faa232-0163-4022-8f1c-ade68529f250\") " pod="openshift-image-registry/node-ca-blxft" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.767566 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.767646 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.767665 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.767695 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.767720 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:15Z","lastTransitionTime":"2025-12-01T14:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.770017 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-blxft" Dec 01 14:46:15 crc kubenswrapper[4637]: W1201 14:46:15.793845 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92faa232_0163_4022_8f1c_ade68529f250.slice/crio-be1cb4c9d158304dd6cf6e16c6e45d1652ffb4504b50d841374e611193b26bfb WatchSource:0}: Error finding container be1cb4c9d158304dd6cf6e16c6e45d1652ffb4504b50d841374e611193b26bfb: Status 404 returned error can't find the container with id be1cb4c9d158304dd6cf6e16c6e45d1652ffb4504b50d841374e611193b26bfb Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.870710 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.870790 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.870805 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.871301 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.871358 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:15Z","lastTransitionTime":"2025-12-01T14:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.946021 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-blxft" event={"ID":"92faa232-0163-4022-8f1c-ade68529f250","Type":"ContainerStarted","Data":"be1cb4c9d158304dd6cf6e16c6e45d1652ffb4504b50d841374e611193b26bfb"} Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.948658 4637 generic.go:334] "Generic (PLEG): container finished" podID="4131bcca-3504-4255-879d-7921162a335c" containerID="52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d" exitCode=0 Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.949201 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" event={"ID":"4131bcca-3504-4255-879d-7921162a335c","Type":"ContainerDied","Data":"52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d"} Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.954730 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" event={"ID":"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831","Type":"ContainerStarted","Data":"68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985"} Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.971469 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.984775 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.984814 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.984824 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.984843 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.984856 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:15Z","lastTransitionTime":"2025-12-01T14:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.985529 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:15 crc kubenswrapper[4637]: I1201 14:46:15.998348 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14
:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.024114 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.042029 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.056601 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.067817 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.082748 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.090690 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.090712 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.090720 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.090734 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.090743 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:16Z","lastTransitionTime":"2025-12-01T14:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.100156 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf096
1dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.114387 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.130858 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.147275 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.158759 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde
9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.178849 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.192645 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.192696 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.192706 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.192723 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.192735 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:16Z","lastTransitionTime":"2025-12-01T14:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.294976 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.295025 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.295036 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.295052 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.295067 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:16Z","lastTransitionTime":"2025-12-01T14:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.398432 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.398524 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.398548 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.398580 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.398606 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:16Z","lastTransitionTime":"2025-12-01T14:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.454068 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:46:16 crc kubenswrapper[4637]: E1201 14:46:16.454279 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-01 14:46:24.454250393 +0000 UTC m=+34.971959231 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.454521 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.454647 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.454749 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.454864 4637 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:16 crc kubenswrapper[4637]: E1201 14:46:16.454709 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 14:46:16 crc kubenswrapper[4637]: E1201 14:46:16.455124 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 14:46:16 crc kubenswrapper[4637]: E1201 14:46:16.454867 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 14:46:16 crc kubenswrapper[4637]: E1201 14:46:16.455303 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 14:46:16 crc kubenswrapper[4637]: E1201 14:46:16.455333 4637 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:46:16 crc kubenswrapper[4637]: E1201 14:46:16.454913 4637 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 14:46:16 crc kubenswrapper[4637]: E1201 14:46:16.455042 4637 secret.go:188] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 14:46:16 crc kubenswrapper[4637]: E1201 14:46:16.455219 4637 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:46:16 crc kubenswrapper[4637]: E1201 14:46:16.455445 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:24.455411685 +0000 UTC m=+34.973120553 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:46:16 crc kubenswrapper[4637]: E1201 14:46:16.455912 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:24.455890319 +0000 UTC m=+34.973599167 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 14:46:16 crc kubenswrapper[4637]: E1201 14:46:16.456037 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:24.456025342 +0000 UTC m=+34.973734180 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 14:46:16 crc kubenswrapper[4637]: E1201 14:46:16.456133 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:24.456123235 +0000 UTC m=+34.973832083 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.502031 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.502324 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.502427 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.502542 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.502624 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:16Z","lastTransitionTime":"2025-12-01T14:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.604393 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.604719 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.604825 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.604913 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.605019 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:16Z","lastTransitionTime":"2025-12-01T14:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.707435 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.707481 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.707492 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.707506 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.707516 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:16Z","lastTransitionTime":"2025-12-01T14:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.770895 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.770896 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.771019 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:16 crc kubenswrapper[4637]: E1201 14:46:16.771108 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:46:16 crc kubenswrapper[4637]: E1201 14:46:16.771215 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:46:16 crc kubenswrapper[4637]: E1201 14:46:16.771300 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.810378 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.810416 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.810427 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.810445 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.810459 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:16Z","lastTransitionTime":"2025-12-01T14:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.913214 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.913249 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.913257 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.913272 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.913282 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:16Z","lastTransitionTime":"2025-12-01T14:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.959089 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-blxft" event={"ID":"92faa232-0163-4022-8f1c-ade68529f250","Type":"ContainerStarted","Data":"e3ea3110b4763fedb47a47f8ef0bb8603c30be6d478c201bde5e1182061f0101"} Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.963750 4637 generic.go:334] "Generic (PLEG): container finished" podID="4131bcca-3504-4255-879d-7921162a335c" containerID="66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45" exitCode=0 Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.963796 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" event={"ID":"4131bcca-3504-4255-879d-7921162a335c","Type":"ContainerDied","Data":"66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45"} Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.979425 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:16 crc kubenswrapper[4637]: I1201 14:46:16.996956 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.014600 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde
9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:17Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.023072 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.023125 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.023137 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:17 crc 
kubenswrapper[4637]: I1201 14:46:17.023160 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.023173 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:17Z","lastTransitionTime":"2025-12-01T14:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.036795 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:17Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.050361 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:17Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.067869 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-12-01T14:46:17Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.092410 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:17Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.108495 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:17Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.120472 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:17Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.125273 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.125338 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.125348 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.125361 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.125370 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:17Z","lastTransitionTime":"2025-12-01T14:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.134401 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820
db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:17Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.147632 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:17Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.164893 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:17Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.178968 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:17Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.188684 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:17Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.200654 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:17Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.214231 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:17Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.227738 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:17Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.228066 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.228102 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.228111 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.228126 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.228135 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:17Z","lastTransitionTime":"2025-12-01T14:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.238259 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:17Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.248755 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:17Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.259764 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:17Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.271076 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:17Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.282288 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:17Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.295422 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:17Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.304447 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:17Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.316543 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:17Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.328031 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:17Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.330823 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.330858 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.330880 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.330895 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.330906 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:17Z","lastTransitionTime":"2025-12-01T14:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.341263 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:17Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.358332 4637 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:17Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.433431 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.433777 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.433903 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.434069 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.434185 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:17Z","lastTransitionTime":"2025-12-01T14:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.536553 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.536603 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.536614 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.536633 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.536645 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:17Z","lastTransitionTime":"2025-12-01T14:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.639117 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.639153 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.639163 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.639177 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.639187 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:17Z","lastTransitionTime":"2025-12-01T14:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.741779 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.741809 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.741818 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.741830 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.741839 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:17Z","lastTransitionTime":"2025-12-01T14:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.845297 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.845336 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.845349 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.845362 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.845372 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:17Z","lastTransitionTime":"2025-12-01T14:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.947958 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.948312 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.948325 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.948340 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.948351 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:17Z","lastTransitionTime":"2025-12-01T14:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.970320 4637 generic.go:334] "Generic (PLEG): container finished" podID="4131bcca-3504-4255-879d-7921162a335c" containerID="030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2" exitCode=0 Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.970368 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" event={"ID":"4131bcca-3504-4255-879d-7921162a335c","Type":"ContainerDied","Data":"030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2"} Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.975664 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" event={"ID":"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831","Type":"ContainerStarted","Data":"26d5a7401ac45aac82ff52352eac47a36454d45e5e9ef0e4d205b1a01f72b95c"} Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.975962 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.975988 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:17 crc kubenswrapper[4637]: I1201 14:46:17.985795 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:17Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.001115 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:17Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.010516 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.017757 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.020151 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:18Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.033351 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:18Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.047724 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:18Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.052264 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.052298 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.052309 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.052327 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.052341 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:18Z","lastTransitionTime":"2025-12-01T14:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.059190 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde
9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:18Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.075997 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:18Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.089467 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:18Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.102447 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:18Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.114592 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:18Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.126060 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-12-01T14:46:18Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.137345 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:18Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.148985 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:18Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.154538 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.154564 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:18 crc 
kubenswrapper[4637]: I1201 14:46:18.154572 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.154586 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.154595 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:18Z","lastTransitionTime":"2025-12-01T14:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.159509 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:18Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.172086 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:18Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.182636 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:18Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.194030 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:18Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.205293 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:18Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.217742 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:18Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.233306 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:18Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.243304 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:18Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.257135 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.257176 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.257200 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.257218 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.257233 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:18Z","lastTransitionTime":"2025-12-01T14:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.257628 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:18Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.269567 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:18Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.278898 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde
9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:18Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.294063 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d5a7401ac45aac82ff52352eac47a36454d45e5e9ef0e4d205b1a01f72b95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:18Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.304162 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-0
1T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:18Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.314726 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:18Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.328805 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:18Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.359482 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.359645 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.359756 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.359877 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.360007 4637 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:18Z","lastTransitionTime":"2025-12-01T14:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.462755 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.462819 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.462834 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.462860 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.462878 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:18Z","lastTransitionTime":"2025-12-01T14:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.566437 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.566497 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.566512 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.566536 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.566552 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:18Z","lastTransitionTime":"2025-12-01T14:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.669187 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.669283 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.669298 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.669316 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.669330 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:18Z","lastTransitionTime":"2025-12-01T14:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.770453 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:18 crc kubenswrapper[4637]: E1201 14:46:18.770591 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.770471 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.770455 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:18 crc kubenswrapper[4637]: E1201 14:46:18.770672 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:46:18 crc kubenswrapper[4637]: E1201 14:46:18.770747 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.772055 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.772087 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.772097 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.772114 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.772126 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:18Z","lastTransitionTime":"2025-12-01T14:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.874546 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.874589 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.874601 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.874616 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.874627 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:18Z","lastTransitionTime":"2025-12-01T14:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.976706 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.976739 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.976746 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.976761 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.976770 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:18Z","lastTransitionTime":"2025-12-01T14:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.981756 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" event={"ID":"4131bcca-3504-4255-879d-7921162a335c","Type":"ContainerStarted","Data":"c5740d65aaf14069cace7ecf8b5a63e990faffe3beb61e59e2cf5b9cf0e8caaa"} Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.981806 4637 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 14:46:18 crc kubenswrapper[4637]: I1201 14:46:18.996627 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:18Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.010189 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.031175 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.052348 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.066627 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.107333 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.107380 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.107389 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.107409 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.107421 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:19Z","lastTransitionTime":"2025-12-01T14:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.142114 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.167110 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.180441 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde
9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.200730 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d5a7401ac45aac82ff52352eac47a36454d45e5e9ef0e4d205b1a01f72b95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.209540 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.209581 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.209593 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.209608 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.209619 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:19Z","lastTransitionTime":"2025-12-01T14:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.219257 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5740d65aaf14069cace7ecf8b5a63e990faffe3beb61e59e2cf5b9cf0e8caaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.233264 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-
01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.247109 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.260379 4637 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.274630 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.313920 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.313989 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.313999 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.314026 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.314037 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:19Z","lastTransitionTime":"2025-12-01T14:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.417396 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.417452 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.417462 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.417479 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.417489 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:19Z","lastTransitionTime":"2025-12-01T14:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.520505 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.520537 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.520545 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.520558 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.520567 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:19Z","lastTransitionTime":"2025-12-01T14:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.623043 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.623099 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.623115 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.623138 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.623155 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:19Z","lastTransitionTime":"2025-12-01T14:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.725803 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.725975 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.726001 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.726043 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.726065 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:19Z","lastTransitionTime":"2025-12-01T14:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.784705 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.797231 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14
:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.809388 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5740d65aaf14069cace7ecf8b5a63e990faffe3beb61e59e2cf5b9cf0e8caaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-addition
al-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec
8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/o
pt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.819374 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.827966 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.828023 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.828034 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.828072 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.828086 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:19Z","lastTransitionTime":"2025-12-01T14:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.838982 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.852129 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.865656 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.877434 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.889654 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.902728 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.915415 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T14:46:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.929589 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.929620 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.929628 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.929641 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.929650 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:19Z","lastTransitionTime":"2025-12-01T14:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.930604 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.949316 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d5a7401ac45aac82ff52352eac47a36454d45e5e9ef0e4d205b1a01f72b95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.965524 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.991589 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhl62_d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831/ovnkube-controller/0.log" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.994518 4637 generic.go:334] "Generic (PLEG): container finished" podID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerID="26d5a7401ac45aac82ff52352eac47a36454d45e5e9ef0e4d205b1a01f72b95c" exitCode=1 Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.995512 4637 scope.go:117] "RemoveContainer" containerID="26d5a7401ac45aac82ff52352eac47a36454d45e5e9ef0e4d205b1a01f72b95c" Dec 01 14:46:19 crc kubenswrapper[4637]: I1201 14:46:19.995740 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" event={"ID":"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831","Type":"ContainerDied","Data":"26d5a7401ac45aac82ff52352eac47a36454d45e5e9ef0e4d205b1a01f72b95c"} Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.010463 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:20Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.031611 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:20Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.035855 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.035887 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.035898 4637 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.035915 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.035946 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:20Z","lastTransitionTime":"2025-12-01T14:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.045282 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c
30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:20Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.059562 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:20Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.074440 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:20Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.090864 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d5a7401ac45aac82ff52352eac47a36454d45e5e9ef0e4d205b1a01f72b95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d5a7401ac45aac82ff52352eac47a36454d45e5e9ef0e4d205b1a01f72b95c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:19Z\\\",\\\"message\\\":\\\"opping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1201 14:46:19.669567 5824 reflector.go:311] Stopping reflector 
*v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 14:46:19.669695 5824 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 14:46:19.669709 5824 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 14:46:19.669720 5824 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 14:46:19.669725 5824 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 14:46:19.669751 5824 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 14:46:19.669790 5824 factory.go:656] Stopping watch factory\\\\nI1201 14:46:19.669806 5824 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 14:46:19.669814 5824 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 14:46:19.669821 5824 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 14:46:19.669828 5824 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 14:46:19.669835 5824 handler.go:208] Removed *v1.Node event handler 7\\\\nI1201 14:46:19.670186 5824 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808c
eb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:20Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.104151 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:20Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.116536 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:20Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.138831 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.138881 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.138895 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.138914 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.138942 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:20Z","lastTransitionTime":"2025-12-01T14:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.139583 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:20Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.187943 4637 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:20Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.218996 4637 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5740d65aaf14069cace7ecf8b5a63e990faffe3beb61e59e2cf5b9cf0e8caaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:20Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.235038 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:20Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.241914 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.242010 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.242020 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.242033 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.242045 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:20Z","lastTransitionTime":"2025-12-01T14:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.253945 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:20Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.270312 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:20Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.347065 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.347098 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.347108 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.347121 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.347134 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:20Z","lastTransitionTime":"2025-12-01T14:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.449194 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.449256 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.449268 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.449304 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.449317 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:20Z","lastTransitionTime":"2025-12-01T14:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.551275 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.551324 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.551337 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.551355 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.551366 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:20Z","lastTransitionTime":"2025-12-01T14:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.653860 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.653896 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.653905 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.653920 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.653944 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:20Z","lastTransitionTime":"2025-12-01T14:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.756776 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.756841 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.756854 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.756888 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.756900 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:20Z","lastTransitionTime":"2025-12-01T14:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.771087 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.771172 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:20 crc kubenswrapper[4637]: E1201 14:46:20.771211 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:46:20 crc kubenswrapper[4637]: E1201 14:46:20.771331 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.771102 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:20 crc kubenswrapper[4637]: E1201 14:46:20.771426 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.859213 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.859259 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.859269 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.859286 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.859299 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:20Z","lastTransitionTime":"2025-12-01T14:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.961236 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.961272 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.961281 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.961296 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:20 crc kubenswrapper[4637]: I1201 14:46:20.961305 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:20Z","lastTransitionTime":"2025-12-01T14:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.000817 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhl62_d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831/ovnkube-controller/0.log" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.004259 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" event={"ID":"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831","Type":"ContainerStarted","Data":"effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0"} Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.004427 4637 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.024269 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:21Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.041427 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:21Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.052366 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde
9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:21Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.064063 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.064101 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.064110 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:21 crc 
kubenswrapper[4637]: I1201 14:46:21.064123 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.064133 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:21Z","lastTransitionTime":"2025-12-01T14:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.069470 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d5a7401ac45aac82ff52352eac47a36454d45e5e9ef0e4d205b1a01f72b95c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:19Z\\\",\\\"message\\\":\\\"opping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1201 14:46:19.669567 5824 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 14:46:19.669695 5824 handler.go:190] Sending 
*v1.Namespace event handler 1 for removal\\\\nI1201 14:46:19.669709 5824 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 14:46:19.669720 5824 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 14:46:19.669725 5824 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 14:46:19.669751 5824 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 14:46:19.669790 5824 factory.go:656] Stopping watch factory\\\\nI1201 14:46:19.669806 5824 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 14:46:19.669814 5824 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 14:46:19.669821 5824 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 14:46:19.669828 5824 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 14:46:19.669835 5824 handler.go:208] Removed *v1.Node event handler 7\\\\nI1201 14:46:19.670186 5824 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnku
be-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
ecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:21Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.082490 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:21Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.095068 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-12-01T14:46:21Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.111531 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5740d65aaf14069cace7ecf8b5a63e990faffe3beb61e59e2cf5b9cf0e8caaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\
",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:21Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.122629 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:21Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.134292 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:21Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.145306 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:21Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.158534 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:21Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.166836 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.166904 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.166927 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.166963 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.166975 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:21Z","lastTransitionTime":"2025-12-01T14:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.169965 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf096
1dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:21Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.182757 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:21Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.194297 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:21Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.268749 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.268784 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.268793 4637 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.268806 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.268853 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:21Z","lastTransitionTime":"2025-12-01T14:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.370621 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.370666 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.370678 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.370695 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.370707 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:21Z","lastTransitionTime":"2025-12-01T14:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.473358 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.473408 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.473420 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.473438 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.473451 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:21Z","lastTransitionTime":"2025-12-01T14:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.575485 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.575529 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.575541 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.575559 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.575569 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:21Z","lastTransitionTime":"2025-12-01T14:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.677978 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.678022 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.678033 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.678046 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.678055 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:21Z","lastTransitionTime":"2025-12-01T14:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.779535 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.779589 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.779598 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.779619 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.779633 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:21Z","lastTransitionTime":"2025-12-01T14:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.882104 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.882138 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.882146 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.882158 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.882169 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:21Z","lastTransitionTime":"2025-12-01T14:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.985322 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.985365 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.985376 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.985397 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:21 crc kubenswrapper[4637]: I1201 14:46:21.985409 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:21Z","lastTransitionTime":"2025-12-01T14:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.009184 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhl62_d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831/ovnkube-controller/1.log" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.009960 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhl62_d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831/ovnkube-controller/0.log" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.012758 4637 generic.go:334] "Generic (PLEG): container finished" podID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerID="effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0" exitCode=1 Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.012798 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" event={"ID":"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831","Type":"ContainerDied","Data":"effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0"} Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.012846 4637 scope.go:117] "RemoveContainer" containerID="26d5a7401ac45aac82ff52352eac47a36454d45e5e9ef0e4d205b1a01f72b95c" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.013833 4637 scope.go:117] "RemoveContainer" containerID="effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0" Dec 01 14:46:22 crc kubenswrapper[4637]: E1201 14:46:22.014004 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-rhl62_openshift-ovn-kubernetes(d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.031116 4637 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:22Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.047493 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:22Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.063283 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5740d65aaf14069cace7ecf8b5a63e990faffe3beb61e59e2cf5b9cf0e8caaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:18Z\\\"}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a
7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPat
h\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:22Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.073745 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:22Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.087863 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.087947 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.087963 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.087983 4637 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.087995 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:22Z","lastTransitionTime":"2025-12-01T14:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.089562 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:22Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.104296 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:22Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.118358 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:22Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.131758 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:22Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.144670 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:22Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.161537 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:22Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.173198 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T14:46:22Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.189252 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:22Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.190183 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 
14:46:22.190236 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.190262 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.190293 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.190313 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:22Z","lastTransitionTime":"2025-12-01T14:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.215600 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d5a7401ac45aac82ff52352eac47a36454d45e5e9ef0e4d205b1a01f72b95c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:19Z\\\",\\\"message\\\":\\\"opping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1201 14:46:19.669567 5824 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 14:46:19.669695 5824 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 14:46:19.669709 5824 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 14:46:19.669720 5824 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 14:46:19.669725 5824 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 14:46:19.669751 5824 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 14:46:19.669790 5824 factory.go:656] Stopping watch factory\\\\nI1201 14:46:19.669806 5824 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 14:46:19.669814 5824 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 14:46:19.669821 5824 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 14:46:19.669828 5824 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 14:46:19.669835 5824 handler.go:208] Removed *v1.Node event handler 7\\\\nI1201 14:46:19.670186 5824 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:21Z\\\",\\\"message\\\":\\\"leted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI1201 14:46:20.939354 5971 services_controller.go:360] Finished syncing service 
router-internal-default on namespace openshift-ingress for network=default : 2.756165ms\\\\nI1201 14:46:20.939432 5971 obj_retry.go:551] Creating *factory.egressNode crc took: 10.929468ms\\\\nI1201 14:46:20.939456 5971 factory.go:1336] Added *v1.Node event handler 7\\\\nI1201 14:46:20.939486 5971 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1201 14:46:20.939540 5971 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 14:46:20.939575 5971 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 14:46:20.939614 5971 factory.go:656] Stopping watch factory\\\\nI1201 14:46:20.939646 5971 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 14:46:20.939677 5971 handler.go:208] Removed *v1.Node event handler 7\\\\nI1201 14:46:20.939723 5971 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1201 14:46:20.939818 5971 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1201 14:46:20.939847 5971 ovnkube.go:599] Stopped ovnkube\\\\nI1201 14:46:20.939873 5971 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 14:46:20.939966 5971 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece
10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:22Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.231083 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:22Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.294116 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.294211 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.294230 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.294259 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.294280 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:22Z","lastTransitionTime":"2025-12-01T14:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.396575 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.396612 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.396624 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.396640 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.396652 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:22Z","lastTransitionTime":"2025-12-01T14:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.499483 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.499540 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.499562 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.499590 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.499611 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:22Z","lastTransitionTime":"2025-12-01T14:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.603001 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.603135 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.603157 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.603182 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.603199 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:22Z","lastTransitionTime":"2025-12-01T14:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.706031 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.706145 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.706170 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.706197 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.706214 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:22Z","lastTransitionTime":"2025-12-01T14:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.771013 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.771106 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:22 crc kubenswrapper[4637]: E1201 14:46:22.771156 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:46:22 crc kubenswrapper[4637]: E1201 14:46:22.771310 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.771435 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:22 crc kubenswrapper[4637]: E1201 14:46:22.771538 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.809010 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.809075 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.809086 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.809104 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.809119 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:22Z","lastTransitionTime":"2025-12-01T14:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.861776 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc"] Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.862212 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.864797 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.865542 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.883548 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:22Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.897462 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:22Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.910898 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:22Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.911709 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.911833 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.912006 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.912105 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.912176 4637 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:22Z","lastTransitionTime":"2025-12-01T14:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.926238 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.926475 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.926552 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.926693 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.926787 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:22Z","lastTransitionTime":"2025-12-01T14:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.929288 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c54f70-de42-4510-8ae3-d5ef74e13ab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gdttc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:22Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:22 crc kubenswrapper[4637]: E1201 14:46:22.944530 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\
\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":
485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"sys
temUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:22Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.947588 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:22Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.947922 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.948003 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.948023 4637 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.948051 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.948069 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:22Z","lastTransitionTime":"2025-12-01T14:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.948251 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/80c54f70-de42-4510-8ae3-d5ef74e13ab7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gdttc\" (UID: \"80c54f70-de42-4510-8ae3-d5ef74e13ab7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.948283 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/80c54f70-de42-4510-8ae3-d5ef74e13ab7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gdttc\" (UID: \"80c54f70-de42-4510-8ae3-d5ef74e13ab7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.948347 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7gdf\" (UniqueName: \"kubernetes.io/projected/80c54f70-de42-4510-8ae3-d5ef74e13ab7-kube-api-access-v7gdf\") pod \"ovnkube-control-plane-749d76644c-gdttc\" (UID: 
\"80c54f70-de42-4510-8ae3-d5ef74e13ab7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.948393 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/80c54f70-de42-4510-8ae3-d5ef74e13ab7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gdttc\" (UID: \"80c54f70-de42-4510-8ae3-d5ef74e13ab7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.960005 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"starte
dAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:22Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:22 crc kubenswrapper[4637]: E1201 14:46:22.962093 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed 
to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:22Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.965201 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.965270 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.965295 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.965324 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.965349 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:22Z","lastTransitionTime":"2025-12-01T14:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.971482 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:22Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:22 crc kubenswrapper[4637]: E1201 14:46:22.977909 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:22Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.981629 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.981837 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.981898 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.982005 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.982074 4637 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:22Z","lastTransitionTime":"2025-12-01T14:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.986171 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:22Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:22 crc kubenswrapper[4637]: E1201 14:46:22.995212 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:22Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.999069 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.999134 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.999167 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.999193 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:22 crc kubenswrapper[4637]: I1201 14:46:22.999224 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:22Z","lastTransitionTime":"2025-12-01T14:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.001087 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:22Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.017160 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhl62_d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831/ovnkube-controller/1.log" Dec 01 14:46:23 crc kubenswrapper[4637]: E1201 14:46:23.020508 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"message\\\":\\\"kubelet has no 
disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fb
a3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98
100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"
quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e90
19a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd8
8fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: E1201 14:46:23.020622 4637 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.021699 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.021741 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.021754 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.021969 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.021983 4637 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:23Z","lastTransitionTime":"2025-12-01T14:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.022926 4637 scope.go:117] "RemoveContainer" containerID="effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.023191 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: E1201 14:46:23.023263 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-rhl62_openshift-ovn-kubernetes(d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.032980 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde
9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.049192 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/80c54f70-de42-4510-8ae3-d5ef74e13ab7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gdttc\" (UID: \"80c54f70-de42-4510-8ae3-d5ef74e13ab7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.049261 4637 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/80c54f70-de42-4510-8ae3-d5ef74e13ab7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gdttc\" (UID: \"80c54f70-de42-4510-8ae3-d5ef74e13ab7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.049364 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/80c54f70-de42-4510-8ae3-d5ef74e13ab7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gdttc\" (UID: \"80c54f70-de42-4510-8ae3-d5ef74e13ab7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.049386 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7gdf\" (UniqueName: \"kubernetes.io/projected/80c54f70-de42-4510-8ae3-d5ef74e13ab7-kube-api-access-v7gdf\") pod \"ovnkube-control-plane-749d76644c-gdttc\" (UID: \"80c54f70-de42-4510-8ae3-d5ef74e13ab7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.050502 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/80c54f70-de42-4510-8ae3-d5ef74e13ab7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gdttc\" (UID: \"80c54f70-de42-4510-8ae3-d5ef74e13ab7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.050511 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/80c54f70-de42-4510-8ae3-d5ef74e13ab7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gdttc\" (UID: \"80c54f70-de42-4510-8ae3-d5ef74e13ab7\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.054131 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d5a7401ac45aac82ff52352eac47a36454d45e5e9ef0e4d205b1a01f72b95c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:19Z\\\",\\\"message\\\":\\\"opping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1201 14:46:19.669567 5824 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 14:46:19.669695 5824 handler.go:190] Sending 
*v1.Namespace event handler 1 for removal\\\\nI1201 14:46:19.669709 5824 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 14:46:19.669720 5824 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 14:46:19.669725 5824 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 14:46:19.669751 5824 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 14:46:19.669790 5824 factory.go:656] Stopping watch factory\\\\nI1201 14:46:19.669806 5824 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 14:46:19.669814 5824 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 14:46:19.669821 5824 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 14:46:19.669828 5824 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 14:46:19.669835 5824 handler.go:208] Removed *v1.Node event handler 7\\\\nI1201 14:46:19.670186 5824 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:21Z\\\",\\\"message\\\":\\\"leted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI1201 14:46:20.939354 5971 services_controller.go:360] Finished syncing service router-internal-default on namespace openshift-ingress for network=default : 2.756165ms\\\\nI1201 14:46:20.939432 5971 obj_retry.go:551] Creating *factory.egressNode crc took: 10.929468ms\\\\nI1201 14:46:20.939456 5971 factory.go:1336] Added *v1.Node event handler 7\\\\nI1201 
14:46:20.939486 5971 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1201 14:46:20.939540 5971 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 14:46:20.939575 5971 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 14:46:20.939614 5971 factory.go:656] Stopping watch factory\\\\nI1201 14:46:20.939646 5971 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 14:46:20.939677 5971 handler.go:208] Removed *v1.Node event handler 7\\\\nI1201 14:46:20.939723 5971 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1201 14:46:20.939818 5971 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1201 14:46:20.939847 5971 ovnkube.go:599] Stopped ovnkube\\\\nI1201 14:46:20.939873 5971 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 14:46:20.939966 5971 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\
\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.060793 4637 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/80c54f70-de42-4510-8ae3-d5ef74e13ab7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gdttc\" (UID: \"80c54f70-de42-4510-8ae3-d5ef74e13ab7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.067201 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7gdf\" (UniqueName: \"kubernetes.io/projected/80c54f70-de42-4510-8ae3-d5ef74e13ab7-kube-api-access-v7gdf\") pod \"ovnkube-control-plane-749d76644c-gdttc\" (UID: \"80c54f70-de42-4510-8ae3-d5ef74e13ab7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.072869 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e0
1a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.085010 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.099467 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5740d65aaf14069cace7ecf8b5a63e990faffe3beb61e59e2cf5b9cf0e8caaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e6868
16332c1305c277e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:
46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.110776 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.122176 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.123682 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.123727 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.123743 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.123763 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.123775 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:23Z","lastTransitionTime":"2025-12-01T14:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.139237 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:21Z\\\",\\\"message\\\":\\\"leted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI1201 14:46:20.939354 5971 services_controller.go:360] Finished syncing service router-internal-default on namespace openshift-ingress for network=default 
: 2.756165ms\\\\nI1201 14:46:20.939432 5971 obj_retry.go:551] Creating *factory.egressNode crc took: 10.929468ms\\\\nI1201 14:46:20.939456 5971 factory.go:1336] Added *v1.Node event handler 7\\\\nI1201 14:46:20.939486 5971 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1201 14:46:20.939540 5971 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 14:46:20.939575 5971 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 14:46:20.939614 5971 factory.go:656] Stopping watch factory\\\\nI1201 14:46:20.939646 5971 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 14:46:20.939677 5971 handler.go:208] Removed *v1.Node event handler 7\\\\nI1201 14:46:20.939723 5971 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1201 14:46:20.939818 5971 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1201 14:46:20.939847 5971 ovnkube.go:599] Stopped ovnkube\\\\nI1201 14:46:20.939873 5971 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 14:46:20.939966 5971 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rhl62_openshift-ovn-kubernetes(d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8
984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.150094 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.161323 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.173046 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.173083 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.202023 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5740d65aaf14069cace7ecf8b5a63e990faffe3beb61e59e2cf5b9cf0e8caaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":
\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabou
ts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.214347 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.225514 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.225549 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.225567 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.225581 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.225594 4637 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:23Z","lastTransitionTime":"2025-12-01T14:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.234619 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.244415 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.249290 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.261295 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.274253 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.283348 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.292847 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c54f70-de42-4510-8ae3-d5ef74e13ab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gdttc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.304339 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.317515 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.327200 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.327231 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.327241 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.327254 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.327262 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:23Z","lastTransitionTime":"2025-12-01T14:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.329565 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z 
is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.342118 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5740d65aaf14069cace7ecf8b5a63e990faffe3beb61e59e2cf5b9cf0e8caaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.358068 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.367306 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.376276 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.387419 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c54f70-de42-4510-8ae3-d5ef74e13ab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gdttc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.407669 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.420029 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.429336 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.429360 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.429369 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.429381 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.429390 4637 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:23Z","lastTransitionTime":"2025-12-01T14:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.436786 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.448140 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.462998 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e6
13190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.475819 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.487175 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.503453 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:21Z\\\",\\\"message\\\":\\\"leted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI1201 14:46:20.939354 5971 services_controller.go:360] Finished syncing service router-internal-default on namespace openshift-ingress for network=default 
: 2.756165ms\\\\nI1201 14:46:20.939432 5971 obj_retry.go:551] Creating *factory.egressNode crc took: 10.929468ms\\\\nI1201 14:46:20.939456 5971 factory.go:1336] Added *v1.Node event handler 7\\\\nI1201 14:46:20.939486 5971 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1201 14:46:20.939540 5971 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 14:46:20.939575 5971 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 14:46:20.939614 5971 factory.go:656] Stopping watch factory\\\\nI1201 14:46:20.939646 5971 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 14:46:20.939677 5971 handler.go:208] Removed *v1.Node event handler 7\\\\nI1201 14:46:20.939723 5971 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1201 14:46:20.939818 5971 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1201 14:46:20.939847 5971 ovnkube.go:599] Stopped ovnkube\\\\nI1201 14:46:20.939873 5971 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 14:46:20.939966 5971 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rhl62_openshift-ovn-kubernetes(d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8
984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.532346 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.532380 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.532390 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.532406 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.532418 4637 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:23Z","lastTransitionTime":"2025-12-01T14:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.634600 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.634641 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.634651 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.634665 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.634675 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:23Z","lastTransitionTime":"2025-12-01T14:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.736581 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.736611 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.736621 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.736634 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.736644 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:23Z","lastTransitionTime":"2025-12-01T14:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.839016 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.839071 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.839084 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.839106 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.839118 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:23Z","lastTransitionTime":"2025-12-01T14:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.941570 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.941622 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.941633 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.941650 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.941663 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:23Z","lastTransitionTime":"2025-12-01T14:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.946903 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-7w2l8"] Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.947375 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:23 crc kubenswrapper[4637]: E1201 14:46:23.947445 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.955536 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdlds\" (UniqueName: \"kubernetes.io/projected/435e8f74-9c96-4508-b6a6-a1a2280f8176-kube-api-access-vdlds\") pod \"network-metrics-daemon-7w2l8\" (UID: \"435e8f74-9c96-4508-b6a6-a1a2280f8176\") " pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.955609 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/435e8f74-9c96-4508-b6a6-a1a2280f8176-metrics-certs\") pod \"network-metrics-daemon-7w2l8\" (UID: \"435e8f74-9c96-4508-b6a6-a1a2280f8176\") " pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.962112 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e6
13190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.974118 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:23 crc kubenswrapper[4637]: I1201 14:46:23.984288 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.001229 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:21Z\\\",\\\"message\\\":\\\"leted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI1201 14:46:20.939354 5971 services_controller.go:360] Finished syncing service router-internal-default on namespace openshift-ingress for network=default 
: 2.756165ms\\\\nI1201 14:46:20.939432 5971 obj_retry.go:551] Creating *factory.egressNode crc took: 10.929468ms\\\\nI1201 14:46:20.939456 5971 factory.go:1336] Added *v1.Node event handler 7\\\\nI1201 14:46:20.939486 5971 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1201 14:46:20.939540 5971 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 14:46:20.939575 5971 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 14:46:20.939614 5971 factory.go:656] Stopping watch factory\\\\nI1201 14:46:20.939646 5971 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 14:46:20.939677 5971 handler.go:208] Removed *v1.Node event handler 7\\\\nI1201 14:46:20.939723 5971 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1201 14:46:20.939818 5971 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1201 14:46:20.939847 5971 ovnkube.go:599] Stopped ovnkube\\\\nI1201 14:46:20.939873 5971 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 14:46:20.939966 5971 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rhl62_openshift-ovn-kubernetes(d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8
984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:23Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.015011 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5740d65aaf14069cace7ecf8b5a63e990faffe3beb61e59e2cf5b9cf0e8caaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4
ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.025590 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" event={"ID":"80c54f70-de42-4510-8ae3-d5ef74e13ab7","Type":"ContainerStarted","Data":"94cff2815328d07dee4e35812d50827e47165f3cc2ba0bd2fb8e2e230d29da54"} Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.025633 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" event={"ID":"80c54f70-de42-4510-8ae3-d5ef74e13ab7","Type":"ContainerStarted","Data":"0c7e6ae528b43e9343dd5219dc19ef92a71343f883c073d0b611b107b82033f3"} Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.025648 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" event={"ID":"80c54f70-de42-4510-8ae3-d5ef74e13ab7","Type":"ContainerStarted","Data":"775bc4ba7440cf914123ecddba78ba4a7f49f3e793503ad59649ceb36dc296df"} Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.029904 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.041532 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-12-01T14:46:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.043867 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.043888 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.043895 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.043908 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.043918 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:24Z","lastTransitionTime":"2025-12-01T14:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.054350 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.056815 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdlds\" (UniqueName: \"kubernetes.io/projected/435e8f74-9c96-4508-b6a6-a1a2280f8176-kube-api-access-vdlds\") pod \"network-metrics-daemon-7w2l8\" (UID: \"435e8f74-9c96-4508-b6a6-a1a2280f8176\") " pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.056864 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/435e8f74-9c96-4508-b6a6-a1a2280f8176-metrics-certs\") pod \"network-metrics-daemon-7w2l8\" (UID: \"435e8f74-9c96-4508-b6a6-a1a2280f8176\") " pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:24 crc kubenswrapper[4637]: E1201 14:46:24.057770 4637 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 14:46:24 crc kubenswrapper[4637]: E1201 
14:46:24.061064 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/435e8f74-9c96-4508-b6a6-a1a2280f8176-metrics-certs podName:435e8f74-9c96-4508-b6a6-a1a2280f8176 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:24.561041135 +0000 UTC m=+35.078749963 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/435e8f74-9c96-4508-b6a6-a1a2280f8176-metrics-certs") pod "network-metrics-daemon-7w2l8" (UID: "435e8f74-9c96-4508-b6a6-a1a2280f8176") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.066760 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19
888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.077737 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdlds\" (UniqueName: \"kubernetes.io/projected/435e8f74-9c96-4508-b6a6-a1a2280f8176-kube-api-access-vdlds\") pod \"network-metrics-daemon-7w2l8\" (UID: \"435e8f74-9c96-4508-b6a6-a1a2280f8176\") " pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.078947 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.089771 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.100776 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c54f70-de42-4510-8ae3-d5ef74e13ab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gdttc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.109840 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7w2l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"435e8f74-9c96-4508-b6a6-a1a2280f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7w2l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc 
kubenswrapper[4637]: I1201 14:46:24.121576 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.133377 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.144498 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.145772 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.145826 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.145840 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.145857 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.145889 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:24Z","lastTransitionTime":"2025-12-01T14:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.155800 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.166072 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.175991 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.187899 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.196718 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.206766 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c54f70-de42-4510-8ae3-d5ef74e13ab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7e6ae528b43e9343dd5219dc19ef92a71343f883c073d0b611b107b82033f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94cff2815328d07dee4e35812d50827e47165f3cc2ba0bd2fb8e2e230d29da54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gdttc\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.215337 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7w2l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"435e8f74-9c96-4508-b6a6-a1a2280f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7w2l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc 
kubenswrapper[4637]: I1201 14:46:24.225834 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.236044 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.248221 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.248251 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.248275 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.248288 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.248297 4637 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:24Z","lastTransitionTime":"2025-12-01T14:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.253053 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:21Z\\\",\\\"message\\\":\\\"leted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI1201 14:46:20.939354 5971 services_controller.go:360] Finished syncing service router-internal-default on namespace openshift-ingress for network=default 
: 2.756165ms\\\\nI1201 14:46:20.939432 5971 obj_retry.go:551] Creating *factory.egressNode crc took: 10.929468ms\\\\nI1201 14:46:20.939456 5971 factory.go:1336] Added *v1.Node event handler 7\\\\nI1201 14:46:20.939486 5971 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1201 14:46:20.939540 5971 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 14:46:20.939575 5971 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 14:46:20.939614 5971 factory.go:656] Stopping watch factory\\\\nI1201 14:46:20.939646 5971 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 14:46:20.939677 5971 handler.go:208] Removed *v1.Node event handler 7\\\\nI1201 14:46:20.939723 5971 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1201 14:46:20.939818 5971 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1201 14:46:20.939847 5971 ovnkube.go:599] Stopped ovnkube\\\\nI1201 14:46:20.939873 5971 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 14:46:20.939966 5971 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rhl62_openshift-ovn-kubernetes(d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8
984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.266128 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e6
13190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.276753 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T14:46:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.287611 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.299672 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.315348 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5740d65aaf14069cace7ecf8b5a63e990faffe3beb61e59e2cf5b9cf0e8caaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e6868
16332c1305c277e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:
46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.326970 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/se
rving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.350622 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.350659 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.350671 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.350686 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.350698 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:24Z","lastTransitionTime":"2025-12-01T14:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.453329 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.453367 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.453379 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.453400 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.453410 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:24Z","lastTransitionTime":"2025-12-01T14:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.458713 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.458794 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.458833 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.458854 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:24 crc kubenswrapper[4637]: E1201 14:46:24.458901 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:46:40.45887707 +0000 UTC m=+50.976585898 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.459144 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:24 crc kubenswrapper[4637]: E1201 14:46:24.459242 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 14:46:24 crc kubenswrapper[4637]: E1201 14:46:24.459270 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 14:46:24 crc kubenswrapper[4637]: E1201 14:46:24.459283 4637 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:46:24 crc kubenswrapper[4637]: E1201 14:46:24.459301 4637 secret.go:188] Couldn't 
get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 14:46:24 crc kubenswrapper[4637]: E1201 14:46:24.459324 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:40.459309942 +0000 UTC m=+50.977018820 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:46:24 crc kubenswrapper[4637]: E1201 14:46:24.459242 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 14:46:24 crc kubenswrapper[4637]: E1201 14:46:24.459371 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 14:46:24 crc kubenswrapper[4637]: E1201 14:46:24.459381 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:40.459368914 +0000 UTC m=+50.977077822 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 14:46:24 crc kubenswrapper[4637]: E1201 14:46:24.459385 4637 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:46:24 crc kubenswrapper[4637]: E1201 14:46:24.459242 4637 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 14:46:24 crc kubenswrapper[4637]: E1201 14:46:24.459454 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:40.459443206 +0000 UTC m=+50.977152034 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 14:46:24 crc kubenswrapper[4637]: E1201 14:46:24.459473 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-12-01 14:46:40.459465646 +0000 UTC m=+50.977174544 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.555281 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.555349 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.555363 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.555378 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.555411 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:24Z","lastTransitionTime":"2025-12-01T14:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.657700 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.657921 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.658078 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.658109 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.658119 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:24Z","lastTransitionTime":"2025-12-01T14:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.660130 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/435e8f74-9c96-4508-b6a6-a1a2280f8176-metrics-certs\") pod \"network-metrics-daemon-7w2l8\" (UID: \"435e8f74-9c96-4508-b6a6-a1a2280f8176\") " pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:24 crc kubenswrapper[4637]: E1201 14:46:24.660219 4637 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 14:46:24 crc kubenswrapper[4637]: E1201 14:46:24.660261 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/435e8f74-9c96-4508-b6a6-a1a2280f8176-metrics-certs podName:435e8f74-9c96-4508-b6a6-a1a2280f8176 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:25.660249319 +0000 UTC m=+36.177958137 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/435e8f74-9c96-4508-b6a6-a1a2280f8176-metrics-certs") pod "network-metrics-daemon-7w2l8" (UID: "435e8f74-9c96-4508-b6a6-a1a2280f8176") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.760326 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.760415 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.760444 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.760479 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.760505 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:24Z","lastTransitionTime":"2025-12-01T14:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.770570 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.770691 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:24 crc kubenswrapper[4637]: E1201 14:46:24.770833 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.770879 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:24 crc kubenswrapper[4637]: E1201 14:46:24.770994 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:46:24 crc kubenswrapper[4637]: E1201 14:46:24.771157 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.862793 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.862885 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.862909 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.863463 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.863500 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:24Z","lastTransitionTime":"2025-12-01T14:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.965791 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.965820 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.965829 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.965843 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:24 crc kubenswrapper[4637]: I1201 14:46:24.965852 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:24Z","lastTransitionTime":"2025-12-01T14:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.068533 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.068572 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.068584 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.068602 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.068613 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:25Z","lastTransitionTime":"2025-12-01T14:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.170976 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.171032 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.171048 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.171069 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.171083 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:25Z","lastTransitionTime":"2025-12-01T14:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.274147 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.274197 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.274331 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.274373 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.274405 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:25Z","lastTransitionTime":"2025-12-01T14:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.378476 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.378517 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.378525 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.378541 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.378550 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:25Z","lastTransitionTime":"2025-12-01T14:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.481525 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.481597 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.481621 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.481650 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.481674 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:25Z","lastTransitionTime":"2025-12-01T14:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.584831 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.584870 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.584881 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.584898 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.584910 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:25Z","lastTransitionTime":"2025-12-01T14:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.667878 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/435e8f74-9c96-4508-b6a6-a1a2280f8176-metrics-certs\") pod \"network-metrics-daemon-7w2l8\" (UID: \"435e8f74-9c96-4508-b6a6-a1a2280f8176\") " pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:25 crc kubenswrapper[4637]: E1201 14:46:25.668049 4637 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 14:46:25 crc kubenswrapper[4637]: E1201 14:46:25.668124 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/435e8f74-9c96-4508-b6a6-a1a2280f8176-metrics-certs podName:435e8f74-9c96-4508-b6a6-a1a2280f8176 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:27.668105732 +0000 UTC m=+38.185814560 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/435e8f74-9c96-4508-b6a6-a1a2280f8176-metrics-certs") pod "network-metrics-daemon-7w2l8" (UID: "435e8f74-9c96-4508-b6a6-a1a2280f8176") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.687970 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.688016 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.688032 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.688051 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.688066 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:25Z","lastTransitionTime":"2025-12-01T14:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.770808 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:25 crc kubenswrapper[4637]: E1201 14:46:25.771002 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.791178 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.791213 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.791223 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.791244 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.791254 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:25Z","lastTransitionTime":"2025-12-01T14:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.893059 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.893104 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.893116 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.893135 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.893148 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:25Z","lastTransitionTime":"2025-12-01T14:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.995562 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.995600 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.995608 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.995621 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:25 crc kubenswrapper[4637]: I1201 14:46:25.995630 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:25Z","lastTransitionTime":"2025-12-01T14:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.098336 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.098379 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.098388 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.098403 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.098411 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:26Z","lastTransitionTime":"2025-12-01T14:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.200992 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.201027 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.201035 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.201048 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.201057 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:26Z","lastTransitionTime":"2025-12-01T14:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.303827 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.304197 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.304296 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.304418 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.304502 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:26Z","lastTransitionTime":"2025-12-01T14:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.407177 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.407383 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.407448 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.407513 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.407578 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:26Z","lastTransitionTime":"2025-12-01T14:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.510085 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.510133 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.510146 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.510161 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.510173 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:26Z","lastTransitionTime":"2025-12-01T14:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.613988 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.614071 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.614084 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.614116 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.614137 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:26Z","lastTransitionTime":"2025-12-01T14:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.716734 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.717139 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.717283 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.717431 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.717551 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:26Z","lastTransitionTime":"2025-12-01T14:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.770755 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.770833 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.770755 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:26 crc kubenswrapper[4637]: E1201 14:46:26.770924 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:46:26 crc kubenswrapper[4637]: E1201 14:46:26.771041 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:46:26 crc kubenswrapper[4637]: E1201 14:46:26.771113 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.820302 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.820334 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.820343 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.820355 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.820365 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:26Z","lastTransitionTime":"2025-12-01T14:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.922621 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.922660 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.922673 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.922687 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:26 crc kubenswrapper[4637]: I1201 14:46:26.922695 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:26Z","lastTransitionTime":"2025-12-01T14:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.025021 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.025063 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.025074 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.025090 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.025101 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:27Z","lastTransitionTime":"2025-12-01T14:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.127548 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.127588 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.127599 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.127615 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.127628 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:27Z","lastTransitionTime":"2025-12-01T14:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.230877 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.230984 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.231001 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.231026 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.231042 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:27Z","lastTransitionTime":"2025-12-01T14:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.333284 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.333322 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.333333 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.333345 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.333355 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:27Z","lastTransitionTime":"2025-12-01T14:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.435852 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.436016 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.436037 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.436065 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.436084 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:27Z","lastTransitionTime":"2025-12-01T14:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.538592 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.538645 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.538659 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.538676 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.538687 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:27Z","lastTransitionTime":"2025-12-01T14:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.641123 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.641204 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.641226 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.641254 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.641272 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:27Z","lastTransitionTime":"2025-12-01T14:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.687376 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/435e8f74-9c96-4508-b6a6-a1a2280f8176-metrics-certs\") pod \"network-metrics-daemon-7w2l8\" (UID: \"435e8f74-9c96-4508-b6a6-a1a2280f8176\") " pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:27 crc kubenswrapper[4637]: E1201 14:46:27.687559 4637 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 14:46:27 crc kubenswrapper[4637]: E1201 14:46:27.687632 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/435e8f74-9c96-4508-b6a6-a1a2280f8176-metrics-certs podName:435e8f74-9c96-4508-b6a6-a1a2280f8176 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:31.687611551 +0000 UTC m=+42.205320409 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/435e8f74-9c96-4508-b6a6-a1a2280f8176-metrics-certs") pod "network-metrics-daemon-7w2l8" (UID: "435e8f74-9c96-4508-b6a6-a1a2280f8176") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.745307 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.745411 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.745484 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.745518 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.745580 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:27Z","lastTransitionTime":"2025-12-01T14:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.770766 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:27 crc kubenswrapper[4637]: E1201 14:46:27.771007 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.847685 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.847917 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.848070 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.848147 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.848204 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:27Z","lastTransitionTime":"2025-12-01T14:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.950851 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.950882 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.950900 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.950916 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:27 crc kubenswrapper[4637]: I1201 14:46:27.950948 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:27Z","lastTransitionTime":"2025-12-01T14:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.053712 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.054021 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.054159 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.054256 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.054336 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:28Z","lastTransitionTime":"2025-12-01T14:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.157944 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.157985 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.157997 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.158017 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.158029 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:28Z","lastTransitionTime":"2025-12-01T14:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.260987 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.261024 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.261033 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.261046 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.261068 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:28Z","lastTransitionTime":"2025-12-01T14:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.363881 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.363916 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.363942 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.363958 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.363968 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:28Z","lastTransitionTime":"2025-12-01T14:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.466770 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.467094 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.467174 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.467256 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.467333 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:28Z","lastTransitionTime":"2025-12-01T14:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.571030 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.571081 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.571093 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.571111 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.571133 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:28Z","lastTransitionTime":"2025-12-01T14:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.673629 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.673661 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.673669 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.673681 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.673694 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:28Z","lastTransitionTime":"2025-12-01T14:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.771338 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.771362 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.771338 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:28 crc kubenswrapper[4637]: E1201 14:46:28.771450 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:46:28 crc kubenswrapper[4637]: E1201 14:46:28.771518 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:46:28 crc kubenswrapper[4637]: E1201 14:46:28.771574 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.776380 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.776438 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.776465 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.776491 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.776511 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:28Z","lastTransitionTime":"2025-12-01T14:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.880121 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.880169 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.880182 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.880202 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.880215 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:28Z","lastTransitionTime":"2025-12-01T14:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.982696 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.982747 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.982756 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.982771 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:28 crc kubenswrapper[4637]: I1201 14:46:28.982780 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:28Z","lastTransitionTime":"2025-12-01T14:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.085924 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.085988 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.085999 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.086016 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.086028 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:29Z","lastTransitionTime":"2025-12-01T14:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.188223 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.188256 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.188267 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.188283 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.188294 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:29Z","lastTransitionTime":"2025-12-01T14:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.291312 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.291367 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.291379 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.291398 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.291409 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:29Z","lastTransitionTime":"2025-12-01T14:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.393688 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.393723 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.393732 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.393747 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.393757 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:29Z","lastTransitionTime":"2025-12-01T14:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.496610 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.496652 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.496663 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.496679 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.496691 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:29Z","lastTransitionTime":"2025-12-01T14:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.599403 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.599441 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.599452 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.599472 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.599485 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:29Z","lastTransitionTime":"2025-12-01T14:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.606011 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.606980 4637 scope.go:117] "RemoveContainer" containerID="effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0" Dec 01 14:46:29 crc kubenswrapper[4637]: E1201 14:46:29.607170 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-rhl62_openshift-ovn-kubernetes(d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.702236 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.702291 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.702306 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.702329 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.702344 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:29Z","lastTransitionTime":"2025-12-01T14:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.771774 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:29 crc kubenswrapper[4637]: E1201 14:46:29.771997 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.788297 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.804564 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.804618 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.804637 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.804659 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.804675 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:29Z","lastTransitionTime":"2025-12-01T14:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.816829 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:21Z\\\",\\\"message\\\":\\\"leted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI1201 14:46:20.939354 5971 services_controller.go:360] Finished syncing service router-internal-default on namespace openshift-ingress for network=default 
: 2.756165ms\\\\nI1201 14:46:20.939432 5971 obj_retry.go:551] Creating *factory.egressNode crc took: 10.929468ms\\\\nI1201 14:46:20.939456 5971 factory.go:1336] Added *v1.Node event handler 7\\\\nI1201 14:46:20.939486 5971 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1201 14:46:20.939540 5971 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 14:46:20.939575 5971 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 14:46:20.939614 5971 factory.go:656] Stopping watch factory\\\\nI1201 14:46:20.939646 5971 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 14:46:20.939677 5971 handler.go:208] Removed *v1.Node event handler 7\\\\nI1201 14:46:20.939723 5971 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1201 14:46:20.939818 5971 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1201 14:46:20.939847 5971 ovnkube.go:599] Stopped ovnkube\\\\nI1201 14:46:20.939873 5971 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 14:46:20.939966 5971 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rhl62_openshift-ovn-kubernetes(d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8
984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.831697 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e6
13190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.849696 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T14:46:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.865099 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.876599 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.889896 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5740d65aaf14069cace7ecf8b5a63e990faffe3beb61e59e2cf5b9cf0e8caaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:1
4Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.904285 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.907087 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.907141 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.907154 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 
14:46:29.907171 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.907184 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:29Z","lastTransitionTime":"2025-12-01T14:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.914648 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.930081 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.945770 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.958996 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.974199 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.987786 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c54f70-de42-4510-8ae3-d5ef74e13ab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7e6ae528b43e9343dd5219dc19ef92a71343f883c073d0b611b107b82033f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94cff2815328d07dee4e35812d50827e47165f3cc2ba0bd2fb8e2e230d29da54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gdttc\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:29 crc kubenswrapper[4637]: I1201 14:46:29.999535 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7w2l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"435e8f74-9c96-4508-b6a6-a1a2280f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7w2l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:30 crc 
kubenswrapper[4637]: I1201 14:46:30.008896 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.008972 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.009000 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.009021 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.009039 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:30Z","lastTransitionTime":"2025-12-01T14:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.016114 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820
db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:30Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.111747 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.111917 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.111959 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.111983 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.112003 4637 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:30Z","lastTransitionTime":"2025-12-01T14:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.213735 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.213770 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.213781 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.213798 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.213810 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:30Z","lastTransitionTime":"2025-12-01T14:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.318342 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.318387 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.318396 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.318412 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.318421 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:30Z","lastTransitionTime":"2025-12-01T14:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.420985 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.421077 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.421085 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.421117 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.421128 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:30Z","lastTransitionTime":"2025-12-01T14:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.523827 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.523916 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.523949 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.523967 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.523980 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:30Z","lastTransitionTime":"2025-12-01T14:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.626000 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.626040 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.626052 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.626069 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.626081 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:30Z","lastTransitionTime":"2025-12-01T14:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.728496 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.728547 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.728614 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.728645 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.728659 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:30Z","lastTransitionTime":"2025-12-01T14:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.771227 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.771367 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.771259 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:30 crc kubenswrapper[4637]: E1201 14:46:30.771461 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:46:30 crc kubenswrapper[4637]: E1201 14:46:30.771600 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:46:30 crc kubenswrapper[4637]: E1201 14:46:30.771734 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.831674 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.831749 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.831763 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.831779 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.831791 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:30Z","lastTransitionTime":"2025-12-01T14:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.933654 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.933702 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.933716 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.933736 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:30 crc kubenswrapper[4637]: I1201 14:46:30.933751 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:30Z","lastTransitionTime":"2025-12-01T14:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.035405 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.035454 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.035471 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.035493 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.035518 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:31Z","lastTransitionTime":"2025-12-01T14:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.137417 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.137454 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.137462 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.137479 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.137489 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:31Z","lastTransitionTime":"2025-12-01T14:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.239509 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.239542 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.239557 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.239572 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.239585 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:31Z","lastTransitionTime":"2025-12-01T14:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.341993 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.342045 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.342055 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.342068 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.342077 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:31Z","lastTransitionTime":"2025-12-01T14:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.444272 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.444314 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.444326 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.444545 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.444558 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:31Z","lastTransitionTime":"2025-12-01T14:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.546697 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.546728 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.546743 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.546762 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.546772 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:31Z","lastTransitionTime":"2025-12-01T14:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.648829 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.648859 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.648868 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.648881 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.648890 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:31Z","lastTransitionTime":"2025-12-01T14:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.725185 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/435e8f74-9c96-4508-b6a6-a1a2280f8176-metrics-certs\") pod \"network-metrics-daemon-7w2l8\" (UID: \"435e8f74-9c96-4508-b6a6-a1a2280f8176\") " pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:31 crc kubenswrapper[4637]: E1201 14:46:31.725332 4637 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 14:46:31 crc kubenswrapper[4637]: E1201 14:46:31.725414 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/435e8f74-9c96-4508-b6a6-a1a2280f8176-metrics-certs podName:435e8f74-9c96-4508-b6a6-a1a2280f8176 nodeName:}" failed. No retries permitted until 2025-12-01 14:46:39.725393215 +0000 UTC m=+50.243102053 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/435e8f74-9c96-4508-b6a6-a1a2280f8176-metrics-certs") pod "network-metrics-daemon-7w2l8" (UID: "435e8f74-9c96-4508-b6a6-a1a2280f8176") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.751883 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.751952 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.751965 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.751983 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.751997 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:31Z","lastTransitionTime":"2025-12-01T14:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.771583 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:31 crc kubenswrapper[4637]: E1201 14:46:31.771726 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.854595 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.854696 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.854731 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.854771 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.854801 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:31Z","lastTransitionTime":"2025-12-01T14:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.957861 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.957917 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.957950 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.957978 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:31 crc kubenswrapper[4637]: I1201 14:46:31.957999 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:31Z","lastTransitionTime":"2025-12-01T14:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.060307 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.060392 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.060417 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.060447 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.060467 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:32Z","lastTransitionTime":"2025-12-01T14:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.163790 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.163889 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.163915 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.164003 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.164040 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:32Z","lastTransitionTime":"2025-12-01T14:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.267912 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.268023 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.268046 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.268081 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.268105 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:32Z","lastTransitionTime":"2025-12-01T14:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.371610 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.371695 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.371713 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.371741 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.371761 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:32Z","lastTransitionTime":"2025-12-01T14:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.474869 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.474927 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.474985 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.475008 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.475025 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:32Z","lastTransitionTime":"2025-12-01T14:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.577763 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.577803 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.577813 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.577826 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.577835 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:32Z","lastTransitionTime":"2025-12-01T14:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.681046 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.681111 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.681128 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.681153 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.681180 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:32Z","lastTransitionTime":"2025-12-01T14:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.770361 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.770389 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:32 crc kubenswrapper[4637]: E1201 14:46:32.770494 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.770361 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:32 crc kubenswrapper[4637]: E1201 14:46:32.770602 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:46:32 crc kubenswrapper[4637]: E1201 14:46:32.770678 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.784639 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.784766 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.784781 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.784846 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.784860 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:32Z","lastTransitionTime":"2025-12-01T14:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.887738 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.887771 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.887781 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.887796 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.887805 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:32Z","lastTransitionTime":"2025-12-01T14:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.990819 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.990870 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.990886 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.990909 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:32 crc kubenswrapper[4637]: I1201 14:46:32.990965 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:32Z","lastTransitionTime":"2025-12-01T14:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.093306 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.093353 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.093368 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.093397 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.093413 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:33Z","lastTransitionTime":"2025-12-01T14:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.180495 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.180535 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.180549 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.180566 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.180577 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:33Z","lastTransitionTime":"2025-12-01T14:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:33 crc kubenswrapper[4637]: E1201 14:46:33.195604 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:33Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.199710 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.199742 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.199753 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.199769 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.199781 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:33Z","lastTransitionTime":"2025-12-01T14:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:33 crc kubenswrapper[4637]: E1201 14:46:33.213007 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:33Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.217244 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.217279 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.217288 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.217302 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.217331 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:33Z","lastTransitionTime":"2025-12-01T14:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:33 crc kubenswrapper[4637]: E1201 14:46:33.231070 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:33Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.234137 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.234173 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.234199 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.234213 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.234222 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:33Z","lastTransitionTime":"2025-12-01T14:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:33 crc kubenswrapper[4637]: E1201 14:46:33.248294 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:33Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.252674 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.252722 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.252736 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.252755 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.252768 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:33Z","lastTransitionTime":"2025-12-01T14:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:33 crc kubenswrapper[4637]: E1201 14:46:33.267142 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:33Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:33 crc kubenswrapper[4637]: E1201 14:46:33.267365 4637 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.269193 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.269227 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.269239 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.269255 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.269269 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:33Z","lastTransitionTime":"2025-12-01T14:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.371800 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.371876 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.371890 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.371907 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.371920 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:33Z","lastTransitionTime":"2025-12-01T14:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.474235 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.474301 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.474317 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.474600 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.474635 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:33Z","lastTransitionTime":"2025-12-01T14:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.577636 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.577681 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.577691 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.577706 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.577716 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:33Z","lastTransitionTime":"2025-12-01T14:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.680900 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.680955 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.680965 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.680981 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.680993 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:33Z","lastTransitionTime":"2025-12-01T14:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.770590 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:33 crc kubenswrapper[4637]: E1201 14:46:33.770738 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.782688 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.782732 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.782756 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.782773 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.782787 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:33Z","lastTransitionTime":"2025-12-01T14:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.885907 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.885997 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.886014 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.886037 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.886055 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:33Z","lastTransitionTime":"2025-12-01T14:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.988610 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.988673 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.988691 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.988716 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:33 crc kubenswrapper[4637]: I1201 14:46:33.988738 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:33Z","lastTransitionTime":"2025-12-01T14:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.092538 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.092644 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.092667 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.092697 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.092723 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:34Z","lastTransitionTime":"2025-12-01T14:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.195211 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.195284 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.195307 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.195338 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.195360 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:34Z","lastTransitionTime":"2025-12-01T14:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.297924 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.298018 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.298029 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.298045 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.298058 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:34Z","lastTransitionTime":"2025-12-01T14:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.400250 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.400295 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.400305 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.400324 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.400335 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:34Z","lastTransitionTime":"2025-12-01T14:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.503258 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.503317 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.503334 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.503357 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.503374 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:34Z","lastTransitionTime":"2025-12-01T14:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.606095 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.606151 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.606190 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.606219 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.606239 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:34Z","lastTransitionTime":"2025-12-01T14:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.708467 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.708498 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.708507 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.708551 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.708561 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:34Z","lastTransitionTime":"2025-12-01T14:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.770827 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.770922 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:34 crc kubenswrapper[4637]: E1201 14:46:34.771055 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:46:34 crc kubenswrapper[4637]: E1201 14:46:34.771139 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.771412 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:34 crc kubenswrapper[4637]: E1201 14:46:34.771619 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.811635 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.811710 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.811736 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.811766 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.811791 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:34Z","lastTransitionTime":"2025-12-01T14:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.914410 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.914439 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.914447 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.914459 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:34 crc kubenswrapper[4637]: I1201 14:46:34.914468 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:34Z","lastTransitionTime":"2025-12-01T14:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.017000 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.017039 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.017050 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.017064 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.017077 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:35Z","lastTransitionTime":"2025-12-01T14:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.119890 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.120032 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.120051 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.120077 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.120094 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:35Z","lastTransitionTime":"2025-12-01T14:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.222828 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.222966 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.222994 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.223021 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.223044 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:35Z","lastTransitionTime":"2025-12-01T14:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.326340 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.326396 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.326409 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.326428 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.326440 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:35Z","lastTransitionTime":"2025-12-01T14:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.429544 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.429593 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.429604 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.429620 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.429633 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:35Z","lastTransitionTime":"2025-12-01T14:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.532156 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.532226 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.532252 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.532281 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.532303 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:35Z","lastTransitionTime":"2025-12-01T14:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.635114 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.635161 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.635173 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.635192 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.635207 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:35Z","lastTransitionTime":"2025-12-01T14:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.737977 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.738016 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.738027 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.738042 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.738053 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:35Z","lastTransitionTime":"2025-12-01T14:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.771031 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:35 crc kubenswrapper[4637]: E1201 14:46:35.771407 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.840499 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.840767 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.840876 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.841007 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.841207 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:35Z","lastTransitionTime":"2025-12-01T14:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.943649 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.943973 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.944060 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.944136 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:35 crc kubenswrapper[4637]: I1201 14:46:35.944205 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:35Z","lastTransitionTime":"2025-12-01T14:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.047415 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.047493 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.047516 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.047545 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.047566 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:36Z","lastTransitionTime":"2025-12-01T14:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.150550 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.150603 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.150614 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.150629 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.150638 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:36Z","lastTransitionTime":"2025-12-01T14:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.253193 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.253348 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.253376 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.253431 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.253462 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:36Z","lastTransitionTime":"2025-12-01T14:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.356388 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.356486 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.356504 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.356530 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.356548 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:36Z","lastTransitionTime":"2025-12-01T14:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.459022 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.459088 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.459099 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.459121 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.459137 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:36Z","lastTransitionTime":"2025-12-01T14:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.562593 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.563002 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.563172 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.563355 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.563545 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:36Z","lastTransitionTime":"2025-12-01T14:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.666594 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.666685 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.666709 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.666739 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.666765 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:36Z","lastTransitionTime":"2025-12-01T14:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.769511 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.769551 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.769560 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.769582 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.769593 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:36Z","lastTransitionTime":"2025-12-01T14:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.770260 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:36 crc kubenswrapper[4637]: E1201 14:46:36.770426 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.770513 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.770599 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:36 crc kubenswrapper[4637]: E1201 14:46:36.770736 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:46:36 crc kubenswrapper[4637]: E1201 14:46:36.770880 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.872297 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.872362 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.872384 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.872412 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.872485 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:36Z","lastTransitionTime":"2025-12-01T14:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.975619 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.975682 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.975704 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.975732 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:36 crc kubenswrapper[4637]: I1201 14:46:36.975754 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:36Z","lastTransitionTime":"2025-12-01T14:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.078980 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.079072 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.079097 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.079129 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.079153 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:37Z","lastTransitionTime":"2025-12-01T14:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.187911 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.187990 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.187999 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.188011 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.188020 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:37Z","lastTransitionTime":"2025-12-01T14:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.294703 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.294744 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.294754 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.294776 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.294786 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:37Z","lastTransitionTime":"2025-12-01T14:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.397426 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.397470 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.397480 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.397497 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.397509 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:37Z","lastTransitionTime":"2025-12-01T14:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.500605 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.500983 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.501185 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.501405 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.501640 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:37Z","lastTransitionTime":"2025-12-01T14:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.603996 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.604262 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.604442 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.604568 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.604720 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:37Z","lastTransitionTime":"2025-12-01T14:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.707398 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.707466 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.707480 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.707497 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.707510 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:37Z","lastTransitionTime":"2025-12-01T14:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.770333 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:37 crc kubenswrapper[4637]: E1201 14:46:37.770483 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.810039 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.810089 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.810101 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.810121 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.810133 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:37Z","lastTransitionTime":"2025-12-01T14:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.912144 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.912206 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.912220 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.912235 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:37 crc kubenswrapper[4637]: I1201 14:46:37.912245 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:37Z","lastTransitionTime":"2025-12-01T14:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.014886 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.014949 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.014961 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.014978 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.014994 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:38Z","lastTransitionTime":"2025-12-01T14:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.117840 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.117875 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.117885 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.117900 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.117911 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:38Z","lastTransitionTime":"2025-12-01T14:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.219920 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.219980 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.219991 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.220009 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.220020 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:38Z","lastTransitionTime":"2025-12-01T14:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.322331 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.322396 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.322411 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.322432 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.322448 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:38Z","lastTransitionTime":"2025-12-01T14:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.425127 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.425192 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.425218 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.425247 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.425268 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:38Z","lastTransitionTime":"2025-12-01T14:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.527951 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.527987 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.527998 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.528013 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.528022 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:38Z","lastTransitionTime":"2025-12-01T14:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.630451 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.630485 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.630492 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.630507 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.630515 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:38Z","lastTransitionTime":"2025-12-01T14:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.732949 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.732996 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.733008 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.733022 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.733031 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:38Z","lastTransitionTime":"2025-12-01T14:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.770619 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.770699 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:38 crc kubenswrapper[4637]: E1201 14:46:38.770743 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:46:38 crc kubenswrapper[4637]: E1201 14:46:38.770808 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.770626 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:38 crc kubenswrapper[4637]: E1201 14:46:38.770881 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.835486 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.835528 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.835540 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.835554 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.835565 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:38Z","lastTransitionTime":"2025-12-01T14:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.938296 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.938345 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.938360 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.938380 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:38 crc kubenswrapper[4637]: I1201 14:46:38.938397 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:38Z","lastTransitionTime":"2025-12-01T14:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.041277 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.041346 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.041369 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.041396 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.041418 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:39Z","lastTransitionTime":"2025-12-01T14:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.144166 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.144266 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.144291 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.144360 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.144383 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:39Z","lastTransitionTime":"2025-12-01T14:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.246674 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.246702 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.246710 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.246723 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.246730 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:39Z","lastTransitionTime":"2025-12-01T14:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.350139 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.350240 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.350334 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.350382 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.350399 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:39Z","lastTransitionTime":"2025-12-01T14:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.453402 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.453465 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.453477 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.453495 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.453507 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:39Z","lastTransitionTime":"2025-12-01T14:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.556460 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.556496 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.556506 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.556521 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.556529 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:39Z","lastTransitionTime":"2025-12-01T14:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.658724 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.658820 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.658832 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.658851 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.658863 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:39Z","lastTransitionTime":"2025-12-01T14:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.761822 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.761868 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.761885 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.761909 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.761928 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:39Z","lastTransitionTime":"2025-12-01T14:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.771214 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:39 crc kubenswrapper[4637]: E1201 14:46:39.771485 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.785379 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7w2l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"435e8f74-9c96-4508-b6a6-a1a2280f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7w2l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:39Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:39 crc 
kubenswrapper[4637]: I1201 14:46:39.801550 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:39Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.812275 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/435e8f74-9c96-4508-b6a6-a1a2280f8176-metrics-certs\") pod \"network-metrics-daemon-7w2l8\" (UID: \"435e8f74-9c96-4508-b6a6-a1a2280f8176\") " pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:39 crc kubenswrapper[4637]: E1201 14:46:39.812494 4637 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 14:46:39 crc kubenswrapper[4637]: E1201 14:46:39.812588 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/435e8f74-9c96-4508-b6a6-a1a2280f8176-metrics-certs podName:435e8f74-9c96-4508-b6a6-a1a2280f8176 nodeName:}" failed. 
No retries permitted until 2025-12-01 14:46:55.812564245 +0000 UTC m=+66.330273123 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/435e8f74-9c96-4508-b6a6-a1a2280f8176-metrics-certs") pod "network-metrics-daemon-7w2l8" (UID: "435e8f74-9c96-4508-b6a6-a1a2280f8176") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.817334 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"
/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:39Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.835760 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:39Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.854525 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:39Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.864976 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.865028 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.865043 4637 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.865063 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.865077 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:39Z","lastTransitionTime":"2025-12-01T14:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.870412 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c
30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:39Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.886647 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c54f70-de42-4510-8ae3-d5ef74e13ab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7e6ae528b43e9343dd5219dc19ef92a71343f883c073d0b611b107b82033f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94cff2815328d07dee4e35812d50827e47165
f3cc2ba0bd2fb8e2e230d29da54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gdttc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:39Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.907977 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e6
13190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:39Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.927104 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T14:46:39Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.943632 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:39Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.965784 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:21Z\\\",\\\"message\\\":\\\"leted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI1201 14:46:20.939354 5971 services_controller.go:360] Finished syncing service router-internal-default on namespace openshift-ingress for network=default 
: 2.756165ms\\\\nI1201 14:46:20.939432 5971 obj_retry.go:551] Creating *factory.egressNode crc took: 10.929468ms\\\\nI1201 14:46:20.939456 5971 factory.go:1336] Added *v1.Node event handler 7\\\\nI1201 14:46:20.939486 5971 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1201 14:46:20.939540 5971 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 14:46:20.939575 5971 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 14:46:20.939614 5971 factory.go:656] Stopping watch factory\\\\nI1201 14:46:20.939646 5971 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 14:46:20.939677 5971 handler.go:208] Removed *v1.Node event handler 7\\\\nI1201 14:46:20.939723 5971 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1201 14:46:20.939818 5971 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1201 14:46:20.939847 5971 ovnkube.go:599] Stopped ovnkube\\\\nI1201 14:46:20.939873 5971 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 14:46:20.939966 5971 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rhl62_openshift-ovn-kubernetes(d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8
984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:39Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.967945 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.968081 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.968173 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.968406 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.968502 4637 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:39Z","lastTransitionTime":"2025-12-01T14:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.981185 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:39Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:39 crc kubenswrapper[4637]: I1201 14:46:39.994737 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:39Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.010396 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5740d65aaf14069cace7ecf8b5a63e990faffe3beb61e59e2cf5b9cf0e8caaa\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:40Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.021285 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:40Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.029616 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:40Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.071294 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.071498 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.071589 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.071685 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.071770 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:40Z","lastTransitionTime":"2025-12-01T14:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.174712 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.174775 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.174784 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.174797 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.174809 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:40Z","lastTransitionTime":"2025-12-01T14:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.279319 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.279380 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.279414 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.279440 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.279461 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:40Z","lastTransitionTime":"2025-12-01T14:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.382593 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.382675 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.382694 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.382719 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.382735 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:40Z","lastTransitionTime":"2025-12-01T14:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.484499 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.484539 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.484547 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.484561 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.484609 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:40Z","lastTransitionTime":"2025-12-01T14:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.519318 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.519441 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.519478 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.519504 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.519529 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:40 crc kubenswrapper[4637]: E1201 14:46:40.519629 4637 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 14:46:40 crc kubenswrapper[4637]: E1201 14:46:40.519629 4637 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 14:46:40 crc kubenswrapper[4637]: E1201 14:46:40.519628 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:47:12.519586608 +0000 UTC m=+83.037295536 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:46:40 crc kubenswrapper[4637]: E1201 14:46:40.519737 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 14:47:12.519715531 +0000 UTC m=+83.037424369 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 14:46:40 crc kubenswrapper[4637]: E1201 14:46:40.519783 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 14:46:40 crc kubenswrapper[4637]: E1201 14:46:40.519804 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 14:46:40 crc kubenswrapper[4637]: E1201 14:46:40.519845 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 14:46:40 crc kubenswrapper[4637]: E1201 14:46:40.519873 4637 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:46:40 crc kubenswrapper[4637]: E1201 14:46:40.519814 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 14:46:40 crc kubenswrapper[4637]: E1201 14:46:40.519963 4637 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:46:40 crc kubenswrapper[4637]: E1201 14:46:40.519819 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 14:47:12.519809854 +0000 UTC m=+83.037518692 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 14:46:40 crc kubenswrapper[4637]: E1201 14:46:40.520021 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 14:47:12.520006719 +0000 UTC m=+83.037715557 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:46:40 crc kubenswrapper[4637]: E1201 14:46:40.520034 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 14:47:12.520027859 +0000 UTC m=+83.037736697 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.586644 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.586898 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.586988 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.587059 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.587115 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:40Z","lastTransitionTime":"2025-12-01T14:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.689117 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.689216 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.689232 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.689250 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.689266 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:40Z","lastTransitionTime":"2025-12-01T14:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.770380 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.770418 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:40 crc kubenswrapper[4637]: E1201 14:46:40.770564 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:46:40 crc kubenswrapper[4637]: E1201 14:46:40.770729 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.770967 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:40 crc kubenswrapper[4637]: E1201 14:46:40.771098 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.792533 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.792626 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.792651 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.792677 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.792699 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:40Z","lastTransitionTime":"2025-12-01T14:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.895709 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.895768 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.895784 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.895837 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.895857 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:40Z","lastTransitionTime":"2025-12-01T14:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.997800 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.997829 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.997840 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.997853 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:40 crc kubenswrapper[4637]: I1201 14:46:40.997864 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:40Z","lastTransitionTime":"2025-12-01T14:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.100746 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.100795 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.100813 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.100837 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.100853 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:41Z","lastTransitionTime":"2025-12-01T14:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.203118 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.203154 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.203164 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.203179 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.203189 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:41Z","lastTransitionTime":"2025-12-01T14:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.306570 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.306629 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.306645 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.306667 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.306685 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:41Z","lastTransitionTime":"2025-12-01T14:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.409648 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.410012 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.410025 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.410043 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.410057 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:41Z","lastTransitionTime":"2025-12-01T14:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.511746 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.511808 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.511829 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.511858 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.511877 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:41Z","lastTransitionTime":"2025-12-01T14:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.614552 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.614603 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.614618 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.614636 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.614650 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:41Z","lastTransitionTime":"2025-12-01T14:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.717120 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.717152 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.717162 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.717177 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.717187 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:41Z","lastTransitionTime":"2025-12-01T14:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.771039 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:41 crc kubenswrapper[4637]: E1201 14:46:41.771383 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.819284 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.819314 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.819324 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.819339 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.819356 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:41Z","lastTransitionTime":"2025-12-01T14:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.921492 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.921559 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.921582 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.921609 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:41 crc kubenswrapper[4637]: I1201 14:46:41.921630 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:41Z","lastTransitionTime":"2025-12-01T14:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.023952 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.023982 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.023991 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.024003 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.024011 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:42Z","lastTransitionTime":"2025-12-01T14:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.127439 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.127488 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.127500 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.127521 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.127534 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:42Z","lastTransitionTime":"2025-12-01T14:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.147306 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.160718 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.167799 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:42Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.184011 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"
name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:42Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.204719 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5740d65aaf14069cace7ecf8b5a63e990faffe3beb61e59e2cf5b9c
f0e8caaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\
\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:42Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.222066 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:42Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.231685 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.231820 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:42 crc 
kubenswrapper[4637]: I1201 14:46:42.231883 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.232006 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.232082 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:42Z","lastTransitionTime":"2025-12-01T14:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.239349 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:42Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.259830 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:42Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.277336 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:42Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.291790 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:42Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.303068 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:42Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.316614 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c54f70-de42-4510-8ae3-d5ef74e13ab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7e6ae528b43e9343dd5219dc19ef92a71343f883c073d0b611b107b82033f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94cff2815328d07dee4e35812d50827e47165f3cc2ba0bd2fb8e2e230d29da54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gdttc\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:42Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.330074 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7w2l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"435e8f74-9c96-4508-b6a6-a1a2280f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7w2l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:42Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:42 crc 
kubenswrapper[4637]: I1201 14:46:42.335008 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.335056 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.335089 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.335115 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.335133 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:42Z","lastTransitionTime":"2025-12-01T14:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.348662 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820
db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:42Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.362672 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde
9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:42Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.384463 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:21Z\\\",\\\"message\\\":\\\"leted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI1201 14:46:20.939354 5971 services_controller.go:360] Finished syncing service router-internal-default on namespace openshift-ingress for network=default 
: 2.756165ms\\\\nI1201 14:46:20.939432 5971 obj_retry.go:551] Creating *factory.egressNode crc took: 10.929468ms\\\\nI1201 14:46:20.939456 5971 factory.go:1336] Added *v1.Node event handler 7\\\\nI1201 14:46:20.939486 5971 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1201 14:46:20.939540 5971 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 14:46:20.939575 5971 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 14:46:20.939614 5971 factory.go:656] Stopping watch factory\\\\nI1201 14:46:20.939646 5971 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 14:46:20.939677 5971 handler.go:208] Removed *v1.Node event handler 7\\\\nI1201 14:46:20.939723 5971 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1201 14:46:20.939818 5971 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1201 14:46:20.939847 5971 ovnkube.go:599] Stopped ovnkube\\\\nI1201 14:46:20.939873 5971 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 14:46:20.939966 5971 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rhl62_openshift-ovn-kubernetes(d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8
984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:42Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.405549 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e6
13190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:42Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.421914 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T14:46:42Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.437912 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.437969 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.437982 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.438000 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.438012 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:42Z","lastTransitionTime":"2025-12-01T14:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.541502 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.542224 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.542307 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.542420 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.542501 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:42Z","lastTransitionTime":"2025-12-01T14:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.646715 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.646760 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.646777 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.646797 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.646813 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:42Z","lastTransitionTime":"2025-12-01T14:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.749897 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.749973 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.749986 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.750003 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.750015 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:42Z","lastTransitionTime":"2025-12-01T14:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.771311 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.771322 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.771366 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:42 crc kubenswrapper[4637]: E1201 14:46:42.771417 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:46:42 crc kubenswrapper[4637]: E1201 14:46:42.771507 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:46:42 crc kubenswrapper[4637]: E1201 14:46:42.771684 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.857546 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.857599 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.857614 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.857634 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.857649 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:42Z","lastTransitionTime":"2025-12-01T14:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.959587 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.959871 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.960010 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.960085 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:42 crc kubenswrapper[4637]: I1201 14:46:42.960179 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:42Z","lastTransitionTime":"2025-12-01T14:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.062180 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.062219 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.062227 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.062250 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.062268 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:43Z","lastTransitionTime":"2025-12-01T14:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.164218 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.164259 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.164269 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.164282 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.164291 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:43Z","lastTransitionTime":"2025-12-01T14:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.266000 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.266062 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.266071 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.266085 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.266094 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:43Z","lastTransitionTime":"2025-12-01T14:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.368577 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.368674 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.368684 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.368697 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.368705 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:43Z","lastTransitionTime":"2025-12-01T14:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.471866 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.471997 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.472025 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.472059 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.472081 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:43Z","lastTransitionTime":"2025-12-01T14:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.496642 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.496700 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.496719 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.496738 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.496751 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:43Z","lastTransitionTime":"2025-12-01T14:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:43 crc kubenswrapper[4637]: E1201 14:46:43.511853 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:43Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.516313 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.516360 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.516372 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.516389 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.516402 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:43Z","lastTransitionTime":"2025-12-01T14:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:43 crc kubenswrapper[4637]: E1201 14:46:43.535794 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... identical status payload omitted ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:43Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.539746 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.539801 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.539819 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.539846 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.539864 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:43Z","lastTransitionTime":"2025-12-01T14:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:43 crc kubenswrapper[4637]: E1201 14:46:43.552993 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... identical status payload omitted ...] for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:43Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.556343 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.556393 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.556406 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.556426 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.556440 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:43Z","lastTransitionTime":"2025-12-01T14:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:43 crc kubenswrapper[4637]: E1201 14:46:43.569981 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:43Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.575707 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.575759 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.575772 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.575794 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.575808 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:43Z","lastTransitionTime":"2025-12-01T14:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:43 crc kubenswrapper[4637]: E1201 14:46:43.588806 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:43Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:43 crc kubenswrapper[4637]: E1201 14:46:43.589010 4637 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.590389 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.590441 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.590454 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.590470 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.590482 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:43Z","lastTransitionTime":"2025-12-01T14:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.693692 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.693738 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.693749 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.693764 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.693776 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:43Z","lastTransitionTime":"2025-12-01T14:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.770673 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:43 crc kubenswrapper[4637]: E1201 14:46:43.771120 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.771403 4637 scope.go:117] "RemoveContainer" containerID="effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.796690 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.796737 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.796751 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.796772 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.796788 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:43Z","lastTransitionTime":"2025-12-01T14:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.901258 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.901325 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.901345 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.901430 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:43 crc kubenswrapper[4637]: I1201 14:46:43.901459 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:43Z","lastTransitionTime":"2025-12-01T14:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.004536 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.004567 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.004576 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.004588 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.004603 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:44Z","lastTransitionTime":"2025-12-01T14:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.093751 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhl62_d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831/ovnkube-controller/1.log" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.096661 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" event={"ID":"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831","Type":"ContainerStarted","Data":"affb5c1fe2e8bcddc7d68560234f7171ba395723f229ac9e714eb5cd4048dba3"} Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.097146 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.107178 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.107229 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.107238 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.107254 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.107267 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:44Z","lastTransitionTime":"2025-12-01T14:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.112510 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"574717bd-1b8a-4875-a64c-e1d4d2ac7204\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c542c15fb110a9f497f95bd8f0abeeec65e9b41552d5c428012654d7ecb8bf75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ea0f19f27900daddcad33f43457b
692b87c8948009801adfc7ee636e49e7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1844efe5596b8a4c272d93b0b118516d918bcbafc8f469c49a5c01ef90db42c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b75d6a35f5c0d1532624b1b3dbee1fe360046b2f6ba685411684c93f8b557d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b75d6a35f5c0d1532624b1b3dbee1fe360046b2f6ba685411684c93f8b557d2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:44Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.127823 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:44Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.151901 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-12-01T14:46:44Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.181490 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5740d65aaf14069cace7ecf8b5a63e990faffe3beb61e59e2cf5b9cf0e8caaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\
",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:44Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.200804 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:44Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.210084 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.210122 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.210133 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.210150 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.210163 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:44Z","lastTransitionTime":"2025-12-01T14:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.212088 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\"
:\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:44Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.223288 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7w2l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"435e8f74-9c96-4508-b6a6-a1a2280f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7w2l8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:44Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.238036 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:44Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.249280 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:44Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.261266 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:44Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.279565 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:44Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.289698 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:44Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.299697 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c54f70-de42-4510-8ae3-d5ef74e13ab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7e6ae528b43e9343dd5219dc19ef92a71343f883c073d0b611b107b82033f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94cff2815328d07dee4e35812d50827e47165f3cc2ba0bd2fb8e2e230d29da54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gdttc\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:44Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.312217 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.312249 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.312275 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.312287 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.312296 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:44Z","lastTransitionTime":"2025-12-01T14:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.313559 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:44Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.325912 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T14:46:44Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.338956 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:44Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.360306 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://affb5c1fe2e8bcddc7d68560234f7171ba395723f229ac9e714eb5cd4048dba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:21Z\\\",\\\"message\\\":\\\"leted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI1201 14:46:20.939354 5971 services_controller.go:360] Finished syncing service router-internal-default on namespace openshift-ingress for network=default 
: 2.756165ms\\\\nI1201 14:46:20.939432 5971 obj_retry.go:551] Creating *factory.egressNode crc took: 10.929468ms\\\\nI1201 14:46:20.939456 5971 factory.go:1336] Added *v1.Node event handler 7\\\\nI1201 14:46:20.939486 5971 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1201 14:46:20.939540 5971 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 14:46:20.939575 5971 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 14:46:20.939614 5971 factory.go:656] Stopping watch factory\\\\nI1201 14:46:20.939646 5971 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 14:46:20.939677 5971 handler.go:208] Removed *v1.Node event handler 7\\\\nI1201 14:46:20.939723 5971 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1201 14:46:20.939818 5971 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1201 14:46:20.939847 5971 ovnkube.go:599] Stopped ovnkube\\\\nI1201 14:46:20.939873 5971 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 14:46:20.939966 5971 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:44Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.414839 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.414875 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.414883 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.414895 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.414905 4637 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:44Z","lastTransitionTime":"2025-12-01T14:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.517836 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.517871 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.517883 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.517900 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.517913 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:44Z","lastTransitionTime":"2025-12-01T14:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.620783 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.620835 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.620848 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.620870 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.620881 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:44Z","lastTransitionTime":"2025-12-01T14:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.723978 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.724014 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.724027 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.724043 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.724054 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:44Z","lastTransitionTime":"2025-12-01T14:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.771068 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.771120 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.771132 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:44 crc kubenswrapper[4637]: E1201 14:46:44.771268 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:46:44 crc kubenswrapper[4637]: E1201 14:46:44.771370 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:46:44 crc kubenswrapper[4637]: E1201 14:46:44.771496 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.826802 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.826842 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.826853 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.826867 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.826877 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:44Z","lastTransitionTime":"2025-12-01T14:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.929081 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.929119 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.929131 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.929145 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:44 crc kubenswrapper[4637]: I1201 14:46:44.929157 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:44Z","lastTransitionTime":"2025-12-01T14:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.032435 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.032474 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.032485 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.032499 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.032509 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:45Z","lastTransitionTime":"2025-12-01T14:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.101543 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhl62_d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831/ovnkube-controller/2.log" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.102294 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhl62_d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831/ovnkube-controller/1.log" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.105679 4637 generic.go:334] "Generic (PLEG): container finished" podID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerID="affb5c1fe2e8bcddc7d68560234f7171ba395723f229ac9e714eb5cd4048dba3" exitCode=1 Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.105744 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" event={"ID":"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831","Type":"ContainerDied","Data":"affb5c1fe2e8bcddc7d68560234f7171ba395723f229ac9e714eb5cd4048dba3"} Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.105789 4637 scope.go:117] "RemoveContainer" containerID="effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.106567 4637 scope.go:117] "RemoveContainer" containerID="affb5c1fe2e8bcddc7d68560234f7171ba395723f229ac9e714eb5cd4048dba3" Dec 01 14:46:45 crc kubenswrapper[4637]: E1201 14:46:45.106823 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rhl62_openshift-ovn-kubernetes(d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.123397 4637 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c04
7c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:45Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.135081 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.135138 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.135155 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.135173 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.135184 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:45Z","lastTransitionTime":"2025-12-01T14:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.139251 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:45Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.150168 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-12-01T14:46:45Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.168398 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://affb5c1fe2e8bcddc7d68560234f7171ba395723f229ac9e714eb5cd4048dba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://effdb0409693d51eee3b6b68c57da569325ebaad6c7f9e3042c9f030ed2cbae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:21Z\\\",\\\"message\\\":\\\"leted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI1201 14:46:20.939354 5971 services_controller.go:360] Finished syncing service router-internal-default on namespace openshift-ingress for network=default 
: 2.756165ms\\\\nI1201 14:46:20.939432 5971 obj_retry.go:551] Creating *factory.egressNode crc took: 10.929468ms\\\\nI1201 14:46:20.939456 5971 factory.go:1336] Added *v1.Node event handler 7\\\\nI1201 14:46:20.939486 5971 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1201 14:46:20.939540 5971 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 14:46:20.939575 5971 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 14:46:20.939614 5971 factory.go:656] Stopping watch factory\\\\nI1201 14:46:20.939646 5971 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 14:46:20.939677 5971 handler.go:208] Removed *v1.Node event handler 7\\\\nI1201 14:46:20.939723 5971 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1201 14:46:20.939818 5971 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1201 14:46:20.939847 5971 ovnkube.go:599] Stopped ovnkube\\\\nI1201 14:46:20.939873 5971 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 14:46:20.939966 5971 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://affb5c1fe2e8bcddc7d68560234f7171ba395723f229ac9e714eb5cd4048dba3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:44Z\\\",\\\"message\\\":\\\"ntroller.go:451] Built service openshift-authentication/oauth-openshift cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-authentication/oauth-openshift_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication/oauth-openshift\\\\\\\"}, Opts:services.LBOpts{Reject:true, 
EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.222\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1201 14:46:44.574687 6238 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"2a3fb1a3-a476-4e14-bcf5-fb79af60206a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]string{}\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea
5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:45Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.180951 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"574717bd-1b8a-4875-a64c-e1d4d2ac7204\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c542c15fb110a9f497f95bd8f0abeeec65e9b41552d5c428012654d7ecb8bf75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ea0f19f27900daddcad33f43457b692b87c8948009801adfc7ee636e49e7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1844efe5596b8a4c272d93b0b118516d918bcbafc8f469c49a5c01ef90db42c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b75d6a35f5c0d1532624b1b3dbee1fe360046b2f6ba685411684c93f8b557d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b75d6a35f5c0d1532624b1b3dbee1fe360046b2f6ba685411684c93f8b557d2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:45Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.194256 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:45Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.205974 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-12-01T14:46:45Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.221406 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5740d65aaf14069cace7ecf8b5a63e990faffe3beb61e59e2cf5b9cf0e8caaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\
",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:45Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.233770 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:45Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.237476 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.237515 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.237527 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.237545 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.237556 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:45Z","lastTransitionTime":"2025-12-01T14:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.243405 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\"
:\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:45Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.252205 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7w2l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"435e8f74-9c96-4508-b6a6-a1a2280f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7w2l8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:45Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.264745 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:45Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.282462 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:45Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.295681 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:45Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.306760 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:45Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.315664 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:45Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.326236 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c54f70-de42-4510-8ae3-d5ef74e13ab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7e6ae528b43e9343dd5219dc19ef92a71343f883c073d0b611b107b82033f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94cff2815328d07dee4e35812d50827e47165f3cc2ba0bd2fb8e2e230d29da54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gdttc\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:45Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.339576 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.339687 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.339744 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.339815 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.339870 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:45Z","lastTransitionTime":"2025-12-01T14:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.442534 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.442585 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.442596 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.442610 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.442620 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:45Z","lastTransitionTime":"2025-12-01T14:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.545900 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.546081 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.546104 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.546127 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.546145 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:45Z","lastTransitionTime":"2025-12-01T14:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.648565 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.648624 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.648640 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.648659 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.648674 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:45Z","lastTransitionTime":"2025-12-01T14:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.750718 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.750759 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.750769 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.750783 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.750794 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:45Z","lastTransitionTime":"2025-12-01T14:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.770688 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:45 crc kubenswrapper[4637]: E1201 14:46:45.770922 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.853736 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.853789 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.853804 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.853823 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.853839 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:45Z","lastTransitionTime":"2025-12-01T14:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.957991 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.958047 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.958059 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.958074 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:45 crc kubenswrapper[4637]: I1201 14:46:45.958083 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:45Z","lastTransitionTime":"2025-12-01T14:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.060816 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.060876 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.060886 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.060900 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.060909 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:46Z","lastTransitionTime":"2025-12-01T14:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.110670 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhl62_d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831/ovnkube-controller/2.log" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.113487 4637 scope.go:117] "RemoveContainer" containerID="affb5c1fe2e8bcddc7d68560234f7171ba395723f229ac9e714eb5cd4048dba3" Dec 01 14:46:46 crc kubenswrapper[4637]: E1201 14:46:46.115675 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rhl62_openshift-ovn-kubernetes(d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.126791 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:46Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.135632 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:46Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.146173 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:46Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.157394 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:46Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.162850 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.162892 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.162900 4637 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.162915 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.162952 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:46Z","lastTransitionTime":"2025-12-01T14:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.167778 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c
30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:46Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.178378 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c54f70-de42-4510-8ae3-d5ef74e13ab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7e6ae528b43e9343dd5219dc19ef92a71343f883c073d0b611b107b82033f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94cff2815328d07dee4e35812d50827e47165
f3cc2ba0bd2fb8e2e230d29da54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gdttc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:46Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.188906 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7w2l8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"435e8f74-9c96-4508-b6a6-a1a2280f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7w2l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:46Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:46 crc 
kubenswrapper[4637]: I1201 14:46:46.201415 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:46Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.218844 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:46Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.239839 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://affb5c1fe2e8bcddc7d68560234f7171ba395723f229ac9e714eb5cd4048dba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://affb5c1fe2e8bcddc7d68560234f7171ba395723f229ac9e714eb5cd4048dba3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:44Z\\\",\\\"message\\\":\\\"ntroller.go:451] Built service openshift-authentication/oauth-openshift cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-authentication/oauth-openshift_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication/oauth-openshift\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.222\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1201 14:46:44.574687 6238 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"2a3fb1a3-a476-4e14-bcf5-fb79af60206a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rhl62_openshift-ovn-kubernetes(d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8
984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:46Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.258222 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e6
13190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:46Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.265635 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.265683 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.265698 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.265714 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.265725 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:46Z","lastTransitionTime":"2025-12-01T14:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.270133 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:46Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.280428 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T14:46:46Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.293290 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/e
tc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T14:46:46Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.309078 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5740d65aaf14069cace7ecf8b5a63e990faffe3beb61e59e2cf5b9cf0e8caaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:46Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.318726 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"574717bd-1b8a-4875-a64c-e1d4d2ac7204\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c542c15fb110a9f497f95bd8f0abeeec65e9b41552d5c428012654d7ecb8bf75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ea0f19f27900daddcad33f43457b692b87c8948009801adfc7ee636e49e7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1844efe5596b8a4c272d93b0b118516d918bcbafc8f469c49a5c01ef90db42c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b75d6a35f5c0d1532624b1b3dbee1fe360046b2f6ba685411684c93f8b557d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b75d6a35f5c0d1532624b1b3dbee1fe360046b2f6ba685411684c93f8b557d2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:46Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.329443 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:46Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.373504 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.373543 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.373554 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.373568 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.373581 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:46Z","lastTransitionTime":"2025-12-01T14:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.476312 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.476348 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.476357 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.476371 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.476380 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:46Z","lastTransitionTime":"2025-12-01T14:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.578640 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.578683 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.578694 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.578709 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.578721 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:46Z","lastTransitionTime":"2025-12-01T14:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.681578 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.681663 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.681684 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.681707 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.681724 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:46Z","lastTransitionTime":"2025-12-01T14:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.771222 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:46 crc kubenswrapper[4637]: E1201 14:46:46.771390 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.771664 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:46 crc kubenswrapper[4637]: E1201 14:46:46.771764 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.772178 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:46 crc kubenswrapper[4637]: E1201 14:46:46.772333 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.784041 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.784075 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.784084 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.784097 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.784107 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:46Z","lastTransitionTime":"2025-12-01T14:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.887000 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.887075 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.887098 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.887127 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.887150 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:46Z","lastTransitionTime":"2025-12-01T14:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.990527 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.990579 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.990595 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.990614 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:46 crc kubenswrapper[4637]: I1201 14:46:46.990629 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:46Z","lastTransitionTime":"2025-12-01T14:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.093251 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.093299 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.093313 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.093332 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.093348 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:47Z","lastTransitionTime":"2025-12-01T14:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.195029 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.195071 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.195084 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.195101 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.195116 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:47Z","lastTransitionTime":"2025-12-01T14:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.297585 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.297629 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.297639 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.297654 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.297667 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:47Z","lastTransitionTime":"2025-12-01T14:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.399869 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.399904 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.399914 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.399949 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.399987 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:47Z","lastTransitionTime":"2025-12-01T14:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.502998 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.503039 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.503054 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.503069 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.503078 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:47Z","lastTransitionTime":"2025-12-01T14:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.606334 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.606381 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.606393 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.606411 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.606426 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:47Z","lastTransitionTime":"2025-12-01T14:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.708993 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.709038 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.709055 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.709077 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.709095 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:47Z","lastTransitionTime":"2025-12-01T14:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.771037 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:47 crc kubenswrapper[4637]: E1201 14:46:47.771238 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.812076 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.812110 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.812119 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.812132 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.812143 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:47Z","lastTransitionTime":"2025-12-01T14:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.914824 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.914870 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.914880 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.914895 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:47 crc kubenswrapper[4637]: I1201 14:46:47.914907 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:47Z","lastTransitionTime":"2025-12-01T14:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.017800 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.017840 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.017851 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.017869 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.017880 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:48Z","lastTransitionTime":"2025-12-01T14:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.119916 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.120016 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.120031 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.120050 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.120065 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:48Z","lastTransitionTime":"2025-12-01T14:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.222260 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.222295 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.222303 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.222318 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.222327 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:48Z","lastTransitionTime":"2025-12-01T14:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.325434 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.325503 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.325522 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.325547 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.325565 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:48Z","lastTransitionTime":"2025-12-01T14:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.437374 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.437433 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.437456 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.437485 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.437507 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:48Z","lastTransitionTime":"2025-12-01T14:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.540915 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.541009 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.541030 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.541052 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.541068 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:48Z","lastTransitionTime":"2025-12-01T14:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.643550 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.643629 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.643667 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.643697 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.643719 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:48Z","lastTransitionTime":"2025-12-01T14:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.747353 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.747414 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.747431 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.747455 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.747472 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:48Z","lastTransitionTime":"2025-12-01T14:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.770892 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:48 crc kubenswrapper[4637]: E1201 14:46:48.771080 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.771116 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.771162 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:48 crc kubenswrapper[4637]: E1201 14:46:48.771311 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:46:48 crc kubenswrapper[4637]: E1201 14:46:48.771464 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.850071 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.850136 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.850148 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.850166 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.850201 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:48Z","lastTransitionTime":"2025-12-01T14:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.952250 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.952319 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.952341 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.952368 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:48 crc kubenswrapper[4637]: I1201 14:46:48.952387 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:48Z","lastTransitionTime":"2025-12-01T14:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.055283 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.055330 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.055342 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.055363 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.055377 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:49Z","lastTransitionTime":"2025-12-01T14:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.157572 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.157615 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.157654 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.157692 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.157708 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:49Z","lastTransitionTime":"2025-12-01T14:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.260367 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.260440 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.260474 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.260506 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.260527 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:49Z","lastTransitionTime":"2025-12-01T14:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.362915 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.362983 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.362998 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.363014 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.363029 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:49Z","lastTransitionTime":"2025-12-01T14:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.465511 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.465545 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.465552 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.465565 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.465574 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:49Z","lastTransitionTime":"2025-12-01T14:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.568117 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.568151 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.568161 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.568176 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.568187 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:49Z","lastTransitionTime":"2025-12-01T14:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.671132 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.671175 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.671183 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.671197 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.671206 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:49Z","lastTransitionTime":"2025-12-01T14:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.770837 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:49 crc kubenswrapper[4637]: E1201 14:46:49.771178 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.776054 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.776124 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.776138 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.776157 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.776238 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:49Z","lastTransitionTime":"2025-12-01T14:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.786036 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:49Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.803298 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://affb5c1fe2e8bcddc7d68560234f7171ba395723f229ac9e714eb5cd4048dba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://affb5c1fe2e8bcddc7d68560234f7171ba395723f229ac9e714eb5cd4048dba3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:44Z\\\",\\\"message\\\":\\\"ntroller.go:451] Built service openshift-authentication/oauth-openshift cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-authentication/oauth-openshift_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication/oauth-openshift\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.222\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1201 14:46:44.574687 6238 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"2a3fb1a3-a476-4e14-bcf5-fb79af60206a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rhl62_openshift-ovn-kubernetes(d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8
984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:49Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.815174 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e6
13190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:49Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.826057 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T14:46:49Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.839771 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:49Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.853141 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:49Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.866456 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5740d65aaf14069cace7ecf8b5a63e990faffe3beb61e59e2cf5b9cf0e8caaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:1
4Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:49Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.876318 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"574717bd-1b8a-4875-a64c-e1d4d2ac7204\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c542c15fb110a9f497f95bd8f0abeeec65e9b41552d5c428012654d7ecb8bf75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ea0f19f27900daddcad33f43457b692b87c8948009801adfc7ee636e49e7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1844efe5596b8a4c272d93b0b118516d918bcbafc8f469c49a5c01ef90db42c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b75d6a35f5c0d1532624b1b3dbee1fe360046b2f6ba685411684c93f8b557d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b75d6a35f5c0d1532624b1b3dbee1fe360046b2f6ba685411684c93f8b557d2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:49Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.878053 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.878101 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.878115 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.878131 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.878141 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:49Z","lastTransitionTime":"2025-12-01T14:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.887390 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:49Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.897304 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:49Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.909045 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:49Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.921618 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:49Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.932780 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:49Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.941860 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:49Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.954369 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c54f70-de42-4510-8ae3-d5ef74e13ab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7e6ae528b43e9343dd5219dc19ef92a71343f883c073d0b611b107b82033f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94cff2815328d07dee4e35812d50827e47165f3cc2ba0bd2fb8e2e230d29da54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gdttc\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:49Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.964354 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7w2l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"435e8f74-9c96-4508-b6a6-a1a2280f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7w2l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:49Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:49 crc 
kubenswrapper[4637]: I1201 14:46:49.975071 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:49Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.980769 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.980795 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.980803 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.980832 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:49 crc kubenswrapper[4637]: I1201 14:46:49.980841 4637 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:49Z","lastTransitionTime":"2025-12-01T14:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.083010 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.083035 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.083043 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.083055 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.083063 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:50Z","lastTransitionTime":"2025-12-01T14:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.184785 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.184828 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.184838 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.184850 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.184859 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:50Z","lastTransitionTime":"2025-12-01T14:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.286751 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.286783 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.286792 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.286805 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.286814 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:50Z","lastTransitionTime":"2025-12-01T14:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.388835 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.388865 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.388878 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.388894 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.388903 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:50Z","lastTransitionTime":"2025-12-01T14:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.491692 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.491735 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.491747 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.491770 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.491782 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:50Z","lastTransitionTime":"2025-12-01T14:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.594782 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.594850 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.594872 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.594900 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.594921 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:50Z","lastTransitionTime":"2025-12-01T14:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.697514 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.697558 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.697568 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.697582 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.697592 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:50Z","lastTransitionTime":"2025-12-01T14:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.771048 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:50 crc kubenswrapper[4637]: E1201 14:46:50.771174 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.771066 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:50 crc kubenswrapper[4637]: E1201 14:46:50.771237 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.771055 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:50 crc kubenswrapper[4637]: E1201 14:46:50.771294 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.800281 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.800321 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.800332 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.800346 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.800356 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:50Z","lastTransitionTime":"2025-12-01T14:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.902316 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.902362 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.902375 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.902392 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:50 crc kubenswrapper[4637]: I1201 14:46:50.902404 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:50Z","lastTransitionTime":"2025-12-01T14:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.005123 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.005169 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.005182 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.005201 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.005214 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:51Z","lastTransitionTime":"2025-12-01T14:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.108308 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.108360 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.108372 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.108388 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.108398 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:51Z","lastTransitionTime":"2025-12-01T14:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.210459 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.210499 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.210511 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.210527 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.210538 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:51Z","lastTransitionTime":"2025-12-01T14:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.312948 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.312975 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.312982 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.312994 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.313002 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:51Z","lastTransitionTime":"2025-12-01T14:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.415569 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.415707 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.415725 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.415749 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.415769 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:51Z","lastTransitionTime":"2025-12-01T14:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.517883 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.517996 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.518020 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.518046 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.518063 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:51Z","lastTransitionTime":"2025-12-01T14:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.620273 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.620589 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.620695 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.620796 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.620904 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:51Z","lastTransitionTime":"2025-12-01T14:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.724183 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.724231 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.724243 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.724260 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.724272 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:51Z","lastTransitionTime":"2025-12-01T14:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.770700 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:51 crc kubenswrapper[4637]: E1201 14:46:51.770860 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.826569 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.827227 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.827313 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.827389 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.827479 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:51Z","lastTransitionTime":"2025-12-01T14:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.930095 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.930178 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.930197 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.930221 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:51 crc kubenswrapper[4637]: I1201 14:46:51.930239 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:51Z","lastTransitionTime":"2025-12-01T14:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.033586 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.033663 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.033689 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.033717 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.033739 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:52Z","lastTransitionTime":"2025-12-01T14:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.136712 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.136788 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.136809 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.136837 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.136854 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:52Z","lastTransitionTime":"2025-12-01T14:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.240502 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.240577 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.240601 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.240631 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.240659 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:52Z","lastTransitionTime":"2025-12-01T14:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.342794 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.342836 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.342847 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.342871 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.342898 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:52Z","lastTransitionTime":"2025-12-01T14:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.445842 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.445895 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.445909 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.445951 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.445968 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:52Z","lastTransitionTime":"2025-12-01T14:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.548740 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.548792 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.548803 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.548822 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.548838 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:52Z","lastTransitionTime":"2025-12-01T14:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.651411 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.651448 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.651459 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.651474 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.651485 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:52Z","lastTransitionTime":"2025-12-01T14:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.754387 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.754447 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.754459 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.754729 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.754747 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:52Z","lastTransitionTime":"2025-12-01T14:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.770661 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.770716 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.770772 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:52 crc kubenswrapper[4637]: E1201 14:46:52.771036 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:46:52 crc kubenswrapper[4637]: E1201 14:46:52.771263 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:46:52 crc kubenswrapper[4637]: E1201 14:46:52.771351 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.856843 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.856915 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.856979 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.857002 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.857014 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:52Z","lastTransitionTime":"2025-12-01T14:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.959103 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.959130 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.959140 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.959155 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:52 crc kubenswrapper[4637]: I1201 14:46:52.959165 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:52Z","lastTransitionTime":"2025-12-01T14:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.060894 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.060944 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.060955 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.060970 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.060981 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:53Z","lastTransitionTime":"2025-12-01T14:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.163274 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.163325 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.163342 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.163360 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.163371 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:53Z","lastTransitionTime":"2025-12-01T14:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.266034 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.266106 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.266120 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.266138 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.266180 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:53Z","lastTransitionTime":"2025-12-01T14:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.368625 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.368661 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.368671 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.368685 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.368696 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:53Z","lastTransitionTime":"2025-12-01T14:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.470578 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.470619 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.470628 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.470643 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.470653 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:53Z","lastTransitionTime":"2025-12-01T14:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.572378 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.572431 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.572444 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.572460 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.572839 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:53Z","lastTransitionTime":"2025-12-01T14:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.674674 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.674709 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.674719 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.674731 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.674740 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:53Z","lastTransitionTime":"2025-12-01T14:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.771332 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:53 crc kubenswrapper[4637]: E1201 14:46:53.771483 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.777946 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.777983 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.777995 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.778010 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.778025 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:53Z","lastTransitionTime":"2025-12-01T14:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.825479 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.825513 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.825522 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.825538 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.825548 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:53Z","lastTransitionTime":"2025-12-01T14:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:53 crc kubenswrapper[4637]: E1201 14:46:53.837333 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:53Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.840657 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.840685 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.840694 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.840707 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.840716 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:53Z","lastTransitionTime":"2025-12-01T14:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:53 crc kubenswrapper[4637]: E1201 14:46:53.852052 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:53Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.855218 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.855267 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.855279 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.855296 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.855307 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:53Z","lastTransitionTime":"2025-12-01T14:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:53 crc kubenswrapper[4637]: E1201 14:46:53.867355 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:53Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.871455 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.871478 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.871486 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.871500 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.871508 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:53Z","lastTransitionTime":"2025-12-01T14:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:53 crc kubenswrapper[4637]: E1201 14:46:53.882084 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:53Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.885267 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.885297 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.885308 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.885351 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.885371 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:53Z","lastTransitionTime":"2025-12-01T14:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:53 crc kubenswrapper[4637]: E1201 14:46:53.897359 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:53Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:53 crc kubenswrapper[4637]: E1201 14:46:53.897628 4637 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.898994 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.899021 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.899030 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.899042 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:53 crc kubenswrapper[4637]: I1201 14:46:53.899051 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:53Z","lastTransitionTime":"2025-12-01T14:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.000540 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.000565 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.000572 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.000584 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.000593 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:54Z","lastTransitionTime":"2025-12-01T14:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.102858 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.102899 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.102911 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.102940 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.102966 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:54Z","lastTransitionTime":"2025-12-01T14:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.204898 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.204960 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.204972 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.204989 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.205001 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:54Z","lastTransitionTime":"2025-12-01T14:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.306880 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.306914 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.306923 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.306953 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.306962 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:54Z","lastTransitionTime":"2025-12-01T14:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.409353 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.409678 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.409780 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.409879 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.409991 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:54Z","lastTransitionTime":"2025-12-01T14:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.512715 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.512779 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.512797 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.512826 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.512846 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:54Z","lastTransitionTime":"2025-12-01T14:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.615399 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.615438 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.615451 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.615468 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.615482 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:54Z","lastTransitionTime":"2025-12-01T14:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.718754 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.718797 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.718810 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.718824 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.718837 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:54Z","lastTransitionTime":"2025-12-01T14:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.770340 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.770362 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:54 crc kubenswrapper[4637]: E1201 14:46:54.770495 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.770380 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:54 crc kubenswrapper[4637]: E1201 14:46:54.770620 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:46:54 crc kubenswrapper[4637]: E1201 14:46:54.770635 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.821106 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.821142 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.821153 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.821181 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.821196 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:54Z","lastTransitionTime":"2025-12-01T14:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.923040 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.923075 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.923086 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.923102 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:54 crc kubenswrapper[4637]: I1201 14:46:54.923112 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:54Z","lastTransitionTime":"2025-12-01T14:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.024976 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.025019 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.025029 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.025044 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.025053 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:55Z","lastTransitionTime":"2025-12-01T14:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.126829 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.126858 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.126867 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.126880 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.126890 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:55Z","lastTransitionTime":"2025-12-01T14:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.231111 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.231166 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.231179 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.231196 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.231209 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:55Z","lastTransitionTime":"2025-12-01T14:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.333786 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.333895 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.333913 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.333977 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.333995 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:55Z","lastTransitionTime":"2025-12-01T14:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.436917 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.436986 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.436996 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.437011 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.437021 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:55Z","lastTransitionTime":"2025-12-01T14:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.539882 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.539981 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.539998 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.540016 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.540028 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:55Z","lastTransitionTime":"2025-12-01T14:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.642534 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.642597 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.642615 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.642639 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.642656 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:55Z","lastTransitionTime":"2025-12-01T14:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.745972 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.746006 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.746014 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.746028 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.746036 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:55Z","lastTransitionTime":"2025-12-01T14:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.771319 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:55 crc kubenswrapper[4637]: E1201 14:46:55.771633 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.848706 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.848757 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.848768 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.848784 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.848794 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:55Z","lastTransitionTime":"2025-12-01T14:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.887394 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/435e8f74-9c96-4508-b6a6-a1a2280f8176-metrics-certs\") pod \"network-metrics-daemon-7w2l8\" (UID: \"435e8f74-9c96-4508-b6a6-a1a2280f8176\") " pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:55 crc kubenswrapper[4637]: E1201 14:46:55.887522 4637 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 14:46:55 crc kubenswrapper[4637]: E1201 14:46:55.887566 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/435e8f74-9c96-4508-b6a6-a1a2280f8176-metrics-certs podName:435e8f74-9c96-4508-b6a6-a1a2280f8176 nodeName:}" failed. No retries permitted until 2025-12-01 14:47:27.887552726 +0000 UTC m=+98.405261554 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/435e8f74-9c96-4508-b6a6-a1a2280f8176-metrics-certs") pod "network-metrics-daemon-7w2l8" (UID: "435e8f74-9c96-4508-b6a6-a1a2280f8176") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.950795 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.950833 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.950842 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.950854 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:55 crc kubenswrapper[4637]: I1201 14:46:55.950865 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:55Z","lastTransitionTime":"2025-12-01T14:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.053604 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.053670 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.053686 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.053714 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.053732 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:56Z","lastTransitionTime":"2025-12-01T14:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.156522 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.156570 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.156581 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.156598 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.156610 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:56Z","lastTransitionTime":"2025-12-01T14:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.259517 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.259629 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.259663 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.259692 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.259712 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:56Z","lastTransitionTime":"2025-12-01T14:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.362202 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.362244 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.362253 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.362267 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.362276 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:56Z","lastTransitionTime":"2025-12-01T14:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.464338 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.464383 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.464392 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.464405 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.464416 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:56Z","lastTransitionTime":"2025-12-01T14:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.567102 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.567146 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.567155 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.567171 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.567181 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:56Z","lastTransitionTime":"2025-12-01T14:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.669629 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.669660 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.669668 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.669681 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.669690 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:56Z","lastTransitionTime":"2025-12-01T14:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.770232 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.770262 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.770261 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:56 crc kubenswrapper[4637]: E1201 14:46:56.770349 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:46:56 crc kubenswrapper[4637]: E1201 14:46:56.770637 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:46:56 crc kubenswrapper[4637]: E1201 14:46:56.770848 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.771679 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.771712 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.771725 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.771740 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.771751 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:56Z","lastTransitionTime":"2025-12-01T14:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.873688 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.873742 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.873755 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.873772 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.873785 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:56Z","lastTransitionTime":"2025-12-01T14:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.976101 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.976161 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.976173 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.976191 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:56 crc kubenswrapper[4637]: I1201 14:46:56.976203 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:56Z","lastTransitionTime":"2025-12-01T14:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.078555 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.078594 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.078605 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.078625 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.078641 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:57Z","lastTransitionTime":"2025-12-01T14:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.144960 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n2brl_f64d8237-8116-4742-8d7f-9f6e8018e4c2/kube-multus/0.log" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.145003 4637 generic.go:334] "Generic (PLEG): container finished" podID="f64d8237-8116-4742-8d7f-9f6e8018e4c2" containerID="837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da" exitCode=1 Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.145027 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-n2brl" event={"ID":"f64d8237-8116-4742-8d7f-9f6e8018e4c2","Type":"ContainerDied","Data":"837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da"} Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.145357 4637 scope.go:117] "RemoveContainer" containerID="837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.155443 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde
9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:57Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.173888 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://affb5c1fe2e8bcddc7d68560234f7171ba395723f229ac9e714eb5cd4048dba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://affb5c1fe2e8bcddc7d68560234f7171ba395723f229ac9e714eb5cd4048dba3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:44Z\\\",\\\"message\\\":\\\"ntroller.go:451] Built service openshift-authentication/oauth-openshift cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-authentication/oauth-openshift_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication/oauth-openshift\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.222\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1201 14:46:44.574687 6238 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"2a3fb1a3-a476-4e14-bcf5-fb79af60206a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rhl62_openshift-ovn-kubernetes(d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8
984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:57Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.185548 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.185580 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.185591 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.185607 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.185617 4637 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:57Z","lastTransitionTime":"2025-12-01T14:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.186277 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2
03d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:57Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.205213 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T14:46:57Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.216737 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:57Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.228716 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:56Z\\\",\\\"message\\\":\\\"2025-12-01T14:46:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_030b68d2-5d99-46f3-9a21-70074a0d7276\\\\n2025-12-01T14:46:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_030b68d2-5d99-46f3-9a21-70074a0d7276 to /host/opt/cni/bin/\\\\n2025-12-01T14:46:11Z [verbose] multus-daemon started\\\\n2025-12-01T14:46:11Z [verbose] Readiness Indicator file check\\\\n2025-12-01T14:46:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:57Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.241294 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5740d65aaf14069cace7ecf8b5a63e990faffe3beb61e59e2cf5b9cf0e8caaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddf
bb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\
\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:57Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.254359 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"574717bd-1b8a-4875-a64c-e1d4d2ac7204\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c542c15fb110a9f497f95bd8f0abeeec65e9b41552d5c428012654d7ecb8bf75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ea0f19f27900daddcad33f43457b692b87c8948009801adfc7ee636e49e7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1844efe5596b8a4c272d93b0b118516d918bcbafc8f469c49a5c01ef90db42c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b75d6a35f5c0d1532624b1b3dbee1fe360046b2f6ba685411684c93f8b557d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b75d6a35f5c0d1532624b1b3dbee1fe360046b2f6ba685411684c93f8b557d2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:57Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.275349 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:57Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.287570 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:57Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.288240 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.288352 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.288416 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.288478 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.288537 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:57Z","lastTransitionTime":"2025-12-01T14:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.299189 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:57Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.311615 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:57Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.324411 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:57Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.334178 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:57Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.344023 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c54f70-de42-4510-8ae3-d5ef74e13ab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7e6ae528b43e9343dd5219dc19ef92a71343f883c073d0b611b107b82033f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94cff2815328d07dee4e35812d50827e47165f3cc2ba0bd2fb8e2e230d29da54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gdttc\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:57Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.354239 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7w2l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"435e8f74-9c96-4508-b6a6-a1a2280f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7w2l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:57Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:57 crc 
kubenswrapper[4637]: I1201 14:46:57.365170 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:57Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.391152 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.391180 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.391189 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.391204 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.391231 4637 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:57Z","lastTransitionTime":"2025-12-01T14:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.493263 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.493297 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.493307 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.493321 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.493348 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:57Z","lastTransitionTime":"2025-12-01T14:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.595683 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.595709 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.595716 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.595728 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.595738 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:57Z","lastTransitionTime":"2025-12-01T14:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.698086 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.698161 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.698172 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.698188 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.698197 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:57Z","lastTransitionTime":"2025-12-01T14:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:57 crc kubenswrapper[4637]: I1201 14:46:57.770493 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:57 crc kubenswrapper[4637]: E1201 14:46:57.770629 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.071838 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.071874 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.071885 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.071902 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.071912 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:58Z","lastTransitionTime":"2025-12-01T14:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.149283 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n2brl_f64d8237-8116-4742-8d7f-9f6e8018e4c2/kube-multus/0.log" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.149330 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-n2brl" event={"ID":"f64d8237-8116-4742-8d7f-9f6e8018e4c2","Type":"ContainerStarted","Data":"a163b9bea4f475be435e2bcf52012f4682da33e52b9a6b4b6dd6a71b59045a26"} Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.161460 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:58Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.171063 4637 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:58Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.173276 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.173299 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.173307 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.173318 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.173327 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:58Z","lastTransitionTime":"2025-12-01T14:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.182835 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:58Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.192082 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:58Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.200980 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c54f70-de42-4510-8ae3-d5ef74e13ab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7e6ae528b43e9343dd5219dc19ef92a71343f883c073d0b611b107b82033f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94cff2815328d07dee4e35812d50827e47165f3cc2ba0bd2fb8e2e230d29da54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gdttc\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:58Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.209896 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7w2l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"435e8f74-9c96-4508-b6a6-a1a2280f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7w2l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:58Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:58 crc 
kubenswrapper[4637]: I1201 14:46:58.221012 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:58Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.231216 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T14:46:58Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.241387 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:58Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.257678 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://affb5c1fe2e8bcddc7d68560234f7171ba395723f229ac9e714eb5cd4048dba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://affb5c1fe2e8bcddc7d68560234f7171ba395723f229ac9e714eb5cd4048dba3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:44Z\\\",\\\"message\\\":\\\"ntroller.go:451] Built service openshift-authentication/oauth-openshift cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-authentication/oauth-openshift_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication/oauth-openshift\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.222\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1201 14:46:44.574687 6238 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"2a3fb1a3-a476-4e14-bcf5-fb79af60206a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rhl62_openshift-ovn-kubernetes(d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8
984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:58Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.271595 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e6
13190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:58Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.279501 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.279524 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.279534 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.279547 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.279555 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:58Z","lastTransitionTime":"2025-12-01T14:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.284006 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"574717bd-1b8a-4875-a64c-e1d4d2ac7204\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c542c15fb110a9f497f95bd8f0abeeec65e9b41552d5c428012654d7ecb8bf75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ea0f19f27900daddcad33f43457b
692b87c8948009801adfc7ee636e49e7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1844efe5596b8a4c272d93b0b118516d918bcbafc8f469c49a5c01ef90db42c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b75d6a35f5c0d1532624b1b3dbee1fe360046b2f6ba685411684c93f8b557d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b75d6a35f5c0d1532624b1b3dbee1fe360046b2f6ba685411684c93f8b557d2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:58Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.296573 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:58Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.307378 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a163b9bea4f475be435e2bcf52012f4682da33e52b9a6b4b6dd6a71b59045a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:56Z\\\",\\\"message\\\":\\\"2025-12-01T14:46:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_030b68d2-5d99-46f3-9a21-70074a0d7276\\\\n2025-12-01T14:46:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_030b68d2-5d99-46f3-9a21-70074a0d7276 to /host/opt/cni/bin/\\\\n2025-12-01T14:46:11Z [verbose] multus-daemon started\\\\n2025-12-01T14:46:11Z [verbose] Readiness Indicator file check\\\\n2025-12-01T14:46:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:58Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.319317 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5740d65aaf14069cace7ecf8b5a63e990faffe3beb61e59e2cf5b9cf0e8caaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4
ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:58Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.328474 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:58Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.340295 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:58Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.381917 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.381974 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.381986 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 
14:46:58.382000 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.382011 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:58Z","lastTransitionTime":"2025-12-01T14:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.484268 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.484308 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.484321 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.484337 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.484348 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:58Z","lastTransitionTime":"2025-12-01T14:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.586600 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.586628 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.586636 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.586647 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.586657 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:58Z","lastTransitionTime":"2025-12-01T14:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.688874 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.688917 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.688943 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.688957 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.688967 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:58Z","lastTransitionTime":"2025-12-01T14:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.770822 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.770865 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:46:58 crc kubenswrapper[4637]: E1201 14:46:58.771176 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:46:58 crc kubenswrapper[4637]: E1201 14:46:58.771303 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.770864 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:46:58 crc kubenswrapper[4637]: E1201 14:46:58.771434 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.782449 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.791157 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.791189 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.791198 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.791212 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.791221 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:58Z","lastTransitionTime":"2025-12-01T14:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.893188 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.893442 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.893507 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.893574 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.893662 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:58Z","lastTransitionTime":"2025-12-01T14:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.996306 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.996345 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.996354 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.996370 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:58 crc kubenswrapper[4637]: I1201 14:46:58.996381 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:58Z","lastTransitionTime":"2025-12-01T14:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.099218 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.099763 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.099846 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.099916 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.100032 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:59Z","lastTransitionTime":"2025-12-01T14:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.202335 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.202371 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.202382 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.202395 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.202403 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:59Z","lastTransitionTime":"2025-12-01T14:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.304710 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.304950 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.305111 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.305195 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.305224 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:59Z","lastTransitionTime":"2025-12-01T14:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.407583 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.407640 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.407649 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.407664 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.407673 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:59Z","lastTransitionTime":"2025-12-01T14:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.509411 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.509452 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.509462 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.509478 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.509489 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:59Z","lastTransitionTime":"2025-12-01T14:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.611968 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.612011 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.612050 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.612069 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.612082 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:59Z","lastTransitionTime":"2025-12-01T14:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.714383 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.714422 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.714431 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.714444 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.714454 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:59Z","lastTransitionTime":"2025-12-01T14:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.771345 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:46:59 crc kubenswrapper[4637]: E1201 14:46:59.771631 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.783646 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath
\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 
14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:59Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.798100 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T14:46:59Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.812692 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:59Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.817096 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 
14:46:59.817148 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.817160 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.817178 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.817190 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:59Z","lastTransitionTime":"2025-12-01T14:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.841778 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://affb5c1fe2e8bcddc7d68560234f7171ba395723f229ac9e714eb5cd4048dba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://affb5c1fe2e8bcddc7d68560234f7171ba395723f229ac9e714eb5cd4048dba3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:44Z\\\",\\\"message\\\":\\\"ntroller.go:451] Built service openshift-authentication/oauth-openshift cluster-wide LB 
for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-authentication/oauth-openshift_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication/oauth-openshift\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.222\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1201 14:46:44.574687 6238 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"2a3fb1a3-a476-4e14-bcf5-fb79af60206a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rhl62_openshift-ovn-kubernetes(d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8
984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:59Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.854283 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5740d65aaf14069cace7ecf8b5a63e990faffe3beb61e59e2cf5b9cf0e8caaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4
ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:59Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.864414 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"574717bd-1b8a-4875-a64c-e1d4d2ac7204\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c542c15fb110a9f497f95bd8f0abeeec65e9b41552d5c428012654d7ecb8bf75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ea0f19f27900daddcad33f43457b692b87c8948009801adfc7ee636e49e7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1844efe5596b8a4c272d93b0b118516d918bcbafc8f469c49a5c01ef90db42c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b75d6a35f5c0d1532624b1b3d
bee1fe360046b2f6ba685411684c93f8b557d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b75d6a35f5c0d1532624b1b3dbee1fe360046b2f6ba685411684c93f8b557d2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:59Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.875891 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:59Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.887370 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a163b9bea4f475be435e2bcf52012f4682da33e52b9a6b4b6dd6a71b59045a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:56Z\\\",\\\"message\\\":\\\"2025-12-01T14:46:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_030b68d2-5d99-46f3-9a21-70074a0d7276\\\\n2025-12-01T14:46:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_030b68d2-5d99-46f3-9a21-70074a0d7276 to /host/opt/cni/bin/\\\\n2025-12-01T14:46:11Z [verbose] multus-daemon started\\\\n2025-12-01T14:46:11Z [verbose] Readiness Indicator file check\\\\n2025-12-01T14:46:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:59Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.899438 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:59Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.909239 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:59Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.919834 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.919872 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.919883 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.919898 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.919910 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:46:59Z","lastTransitionTime":"2025-12-01T14:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.920331 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:59Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.934705 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:59Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.947728 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c54f70-de42-4510-8ae3-d5ef74e13ab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7e6ae528b43e9343dd5219dc19ef92a71343f883c073d0b611b107b82033f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94cff2815328d07dee4e35812d50827e47165f3cc2ba0bd2fb8e2e230d29da54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gdttc\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:59Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.957778 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7w2l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"435e8f74-9c96-4508-b6a6-a1a2280f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7w2l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:59Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:59 crc 
kubenswrapper[4637]: I1201 14:46:59.969031 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:59Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.977732 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee01a4e-140a-47fb-bae0-6dc6088e3a18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226ba1817928934b4987c768dc91c2b010d39f84323f2d84555a9e3418e1563b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6289a7db0399411b7aa516cf9953e3e199b7c83e4858133819dc0b436ffcddfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289a7db0399411b7aa516cf9953e3e199b7c83e4858133819dc0b436ffcddfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:59Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.988028 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:59Z is after 2025-08-24T17:21:41Z" Dec 01 14:46:59 crc kubenswrapper[4637]: I1201 14:46:59.997598 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:46:59Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.022428 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.022475 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.022487 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.022503 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.022519 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:00Z","lastTransitionTime":"2025-12-01T14:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.125058 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.125097 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.125106 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.125119 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.125128 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:00Z","lastTransitionTime":"2025-12-01T14:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.227296 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.227341 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.227352 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.227370 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.227381 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:00Z","lastTransitionTime":"2025-12-01T14:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.329090 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.329131 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.329142 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.329160 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.329171 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:00Z","lastTransitionTime":"2025-12-01T14:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.431022 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.431084 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.431103 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.431127 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.431143 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:00Z","lastTransitionTime":"2025-12-01T14:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.533584 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.533637 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.533655 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.533680 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.533697 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:00Z","lastTransitionTime":"2025-12-01T14:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.635162 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.635194 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.635203 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.635215 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.635223 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:00Z","lastTransitionTime":"2025-12-01T14:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.737461 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.737530 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.737540 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.737554 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.737563 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:00Z","lastTransitionTime":"2025-12-01T14:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.771213 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.771247 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.771283 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:00 crc kubenswrapper[4637]: E1201 14:47:00.771344 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:47:00 crc kubenswrapper[4637]: E1201 14:47:00.771411 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:00 crc kubenswrapper[4637]: E1201 14:47:00.771522 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.772144 4637 scope.go:117] "RemoveContainer" containerID="affb5c1fe2e8bcddc7d68560234f7171ba395723f229ac9e714eb5cd4048dba3" Dec 01 14:47:00 crc kubenswrapper[4637]: E1201 14:47:00.772288 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rhl62_openshift-ovn-kubernetes(d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.840505 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.840537 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.840546 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.840560 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.840569 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:00Z","lastTransitionTime":"2025-12-01T14:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.943987 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.944024 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.944034 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.944050 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:00 crc kubenswrapper[4637]: I1201 14:47:00.944060 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:00Z","lastTransitionTime":"2025-12-01T14:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.046781 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.046837 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.046846 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.046858 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.046867 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:01Z","lastTransitionTime":"2025-12-01T14:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.149297 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.149337 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.149349 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.149364 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.149374 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:01Z","lastTransitionTime":"2025-12-01T14:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.252067 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.252101 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.252113 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.252128 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.252139 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:01Z","lastTransitionTime":"2025-12-01T14:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.353839 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.353880 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.353895 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.353912 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.353922 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:01Z","lastTransitionTime":"2025-12-01T14:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.456291 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.456338 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.456349 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.456368 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.456378 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:01Z","lastTransitionTime":"2025-12-01T14:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.558873 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.558916 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.558940 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.558957 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.558967 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:01Z","lastTransitionTime":"2025-12-01T14:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.661556 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.661595 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.661606 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.661621 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.661633 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:01Z","lastTransitionTime":"2025-12-01T14:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.764710 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.764753 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.764764 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.764781 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.764791 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:01Z","lastTransitionTime":"2025-12-01T14:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.770805 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:01 crc kubenswrapper[4637]: E1201 14:47:01.770925 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.867594 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.867631 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.867641 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.867656 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.867667 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:01Z","lastTransitionTime":"2025-12-01T14:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.970089 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.970137 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.970147 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.970163 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:01 crc kubenswrapper[4637]: I1201 14:47:01.970174 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:01Z","lastTransitionTime":"2025-12-01T14:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.072705 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.072752 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.072766 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.072780 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.072791 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:02Z","lastTransitionTime":"2025-12-01T14:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.176301 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.176351 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.176368 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.176392 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.176409 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:02Z","lastTransitionTime":"2025-12-01T14:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.278013 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.278043 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.278050 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.278063 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.278071 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:02Z","lastTransitionTime":"2025-12-01T14:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.380996 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.381034 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.381042 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.381057 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.381067 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:02Z","lastTransitionTime":"2025-12-01T14:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.485164 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.485198 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.485209 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.485224 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.485235 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:02Z","lastTransitionTime":"2025-12-01T14:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.588498 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.588540 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.588552 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.588569 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.588581 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:02Z","lastTransitionTime":"2025-12-01T14:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.691858 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.691906 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.691917 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.691963 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.691980 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:02Z","lastTransitionTime":"2025-12-01T14:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.771249 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.771256 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.771373 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:02 crc kubenswrapper[4637]: E1201 14:47:02.771596 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:47:02 crc kubenswrapper[4637]: E1201 14:47:02.771724 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:47:02 crc kubenswrapper[4637]: E1201 14:47:02.771845 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.794533 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.794581 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.794593 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.794612 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.794624 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:02Z","lastTransitionTime":"2025-12-01T14:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.896740 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.896823 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.896838 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.896855 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.896865 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:02Z","lastTransitionTime":"2025-12-01T14:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.999076 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.999138 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.999151 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.999170 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:02 crc kubenswrapper[4637]: I1201 14:47:02.999201 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:02Z","lastTransitionTime":"2025-12-01T14:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.133413 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.133452 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.133462 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.133479 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.133492 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:03Z","lastTransitionTime":"2025-12-01T14:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.236116 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.236199 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.236211 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.236227 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.236236 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:03Z","lastTransitionTime":"2025-12-01T14:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.339528 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.339566 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.339574 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.339590 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.339598 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:03Z","lastTransitionTime":"2025-12-01T14:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.442582 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.442659 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.442683 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.442714 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.442743 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:03Z","lastTransitionTime":"2025-12-01T14:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.545909 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.546346 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.546555 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.546756 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.547037 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:03Z","lastTransitionTime":"2025-12-01T14:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.649370 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.649401 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.649425 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.649439 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.649448 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:03Z","lastTransitionTime":"2025-12-01T14:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.752503 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.752541 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.752551 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.752566 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.752575 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:03Z","lastTransitionTime":"2025-12-01T14:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.771358 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:03 crc kubenswrapper[4637]: E1201 14:47:03.771501 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.855110 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.855136 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.855143 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.855156 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.855164 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:03Z","lastTransitionTime":"2025-12-01T14:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.958299 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.958348 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.958359 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.958377 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:03 crc kubenswrapper[4637]: I1201 14:47:03.958391 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:03Z","lastTransitionTime":"2025-12-01T14:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.060894 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.060981 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.061000 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.061026 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.061044 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:04Z","lastTransitionTime":"2025-12-01T14:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.163818 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.163857 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.163867 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.163882 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.163891 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:04Z","lastTransitionTime":"2025-12-01T14:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.259678 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.259763 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.259795 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.259825 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.259845 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:04Z","lastTransitionTime":"2025-12-01T14:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:04 crc kubenswrapper[4637]: E1201 14:47:04.275539 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:04Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.279754 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.279886 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.279970 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.280033 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.280094 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:04Z","lastTransitionTime":"2025-12-01T14:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:04 crc kubenswrapper[4637]: E1201 14:47:04.302428 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:04Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.308416 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.308472 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.308483 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.308501 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.308514 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:04Z","lastTransitionTime":"2025-12-01T14:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:04 crc kubenswrapper[4637]: E1201 14:47:04.329115 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:04Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.332395 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.332417 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.332425 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.332438 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.332446 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:04Z","lastTransitionTime":"2025-12-01T14:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:04 crc kubenswrapper[4637]: E1201 14:47:04.348979 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:04Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.352828 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.352961 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.353025 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.353102 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.353165 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:04Z","lastTransitionTime":"2025-12-01T14:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:04 crc kubenswrapper[4637]: E1201 14:47:04.367517 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:04Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:04 crc kubenswrapper[4637]: E1201 14:47:04.367664 4637 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.369296 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.369345 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.369356 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.369370 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.369380 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:04Z","lastTransitionTime":"2025-12-01T14:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.472553 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.472606 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.472621 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.472642 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.472657 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:04Z","lastTransitionTime":"2025-12-01T14:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.575041 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.575096 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.575120 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.575144 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.575159 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:04Z","lastTransitionTime":"2025-12-01T14:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.677908 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.677968 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.677979 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.677993 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.678003 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:04Z","lastTransitionTime":"2025-12-01T14:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.770583 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:04 crc kubenswrapper[4637]: E1201 14:47:04.770720 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.770770 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.770813 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:04 crc kubenswrapper[4637]: E1201 14:47:04.770854 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:04 crc kubenswrapper[4637]: E1201 14:47:04.771048 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.780426 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.780458 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.780470 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.780486 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.780498 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:04Z","lastTransitionTime":"2025-12-01T14:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.882882 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.882990 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.883012 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.883037 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.883054 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:04Z","lastTransitionTime":"2025-12-01T14:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.984918 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.985012 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.985032 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.985063 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:04 crc kubenswrapper[4637]: I1201 14:47:04.985085 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:04Z","lastTransitionTime":"2025-12-01T14:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.087285 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.087348 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.087373 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.087419 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.087443 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:05Z","lastTransitionTime":"2025-12-01T14:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.191074 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.191127 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.191144 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.191196 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.191216 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:05Z","lastTransitionTime":"2025-12-01T14:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.294834 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.294887 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.294903 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.294926 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.294972 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:05Z","lastTransitionTime":"2025-12-01T14:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.397692 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.397757 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.397779 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.397808 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.397829 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:05Z","lastTransitionTime":"2025-12-01T14:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.500696 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.500733 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.500743 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.500758 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.500769 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:05Z","lastTransitionTime":"2025-12-01T14:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.603344 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.603406 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.603430 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.603458 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.603481 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:05Z","lastTransitionTime":"2025-12-01T14:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.706810 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.706867 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.706888 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.706916 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.706977 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:05Z","lastTransitionTime":"2025-12-01T14:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.770731 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:05 crc kubenswrapper[4637]: E1201 14:47:05.770844 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.809218 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.809309 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.809337 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.809388 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.809410 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:05Z","lastTransitionTime":"2025-12-01T14:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.912787 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.912836 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.912856 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.912881 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:05 crc kubenswrapper[4637]: I1201 14:47:05.912897 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:05Z","lastTransitionTime":"2025-12-01T14:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.015808 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.015837 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.015848 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.015861 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.015872 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:06Z","lastTransitionTime":"2025-12-01T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.118504 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.118545 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.118556 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.118574 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.118586 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:06Z","lastTransitionTime":"2025-12-01T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.221484 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.221533 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.221542 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.221555 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.221564 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:06Z","lastTransitionTime":"2025-12-01T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.324848 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.324914 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.324986 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.325018 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.325038 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:06Z","lastTransitionTime":"2025-12-01T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.427282 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.427328 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.427356 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.427373 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.427386 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:06Z","lastTransitionTime":"2025-12-01T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.529669 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.529717 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.529729 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.529746 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.529759 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:06Z","lastTransitionTime":"2025-12-01T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.631840 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.631894 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.631905 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.631925 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.631953 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:06Z","lastTransitionTime":"2025-12-01T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.736965 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.737014 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.737026 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.737044 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.737060 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:06Z","lastTransitionTime":"2025-12-01T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.770316 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.770353 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.770425 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:06 crc kubenswrapper[4637]: E1201 14:47:06.770453 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:06 crc kubenswrapper[4637]: E1201 14:47:06.770588 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:47:06 crc kubenswrapper[4637]: E1201 14:47:06.770695 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.838984 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.839035 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.839048 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.839065 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.839082 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:06Z","lastTransitionTime":"2025-12-01T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.941403 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.941436 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.941445 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.941459 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:06 crc kubenswrapper[4637]: I1201 14:47:06.941469 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:06Z","lastTransitionTime":"2025-12-01T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.043790 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.043818 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.043829 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.043846 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.043856 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:07Z","lastTransitionTime":"2025-12-01T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.145818 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.145851 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.145867 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.145884 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.145894 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:07Z","lastTransitionTime":"2025-12-01T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.248188 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.248219 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.248227 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.248239 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.248247 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:07Z","lastTransitionTime":"2025-12-01T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.350362 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.350392 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.350399 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.350411 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.350419 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:07Z","lastTransitionTime":"2025-12-01T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.453300 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.453343 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.453353 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.453369 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.453379 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:07Z","lastTransitionTime":"2025-12-01T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.555501 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.555551 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.555564 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.555581 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.555593 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:07Z","lastTransitionTime":"2025-12-01T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.658647 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.658691 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.658703 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.658720 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.658732 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:07Z","lastTransitionTime":"2025-12-01T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.760961 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.760995 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.761006 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.761023 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.761035 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:07Z","lastTransitionTime":"2025-12-01T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.770328 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:07 crc kubenswrapper[4637]: E1201 14:47:07.770453 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.863808 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.863856 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.863868 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.863887 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.863900 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:07Z","lastTransitionTime":"2025-12-01T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.966384 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.966425 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.966436 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.966451 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:07 crc kubenswrapper[4637]: I1201 14:47:07.966643 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:07Z","lastTransitionTime":"2025-12-01T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.069286 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.069312 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.069319 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.069348 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.069358 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:08Z","lastTransitionTime":"2025-12-01T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.174893 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.174919 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.174941 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.174960 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.174975 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:08Z","lastTransitionTime":"2025-12-01T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.276737 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.276781 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.276794 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.276810 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.276823 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:08Z","lastTransitionTime":"2025-12-01T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.379153 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.379191 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.379204 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.379220 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.379231 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:08Z","lastTransitionTime":"2025-12-01T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.481161 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.481195 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.481204 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.481218 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.481226 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:08Z","lastTransitionTime":"2025-12-01T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.583645 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.583687 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.583698 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.583739 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.583750 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:08Z","lastTransitionTime":"2025-12-01T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.686360 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.686425 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.686443 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.686466 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.686484 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:08Z","lastTransitionTime":"2025-12-01T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.771065 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.771122 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:08 crc kubenswrapper[4637]: E1201 14:47:08.771195 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.771065 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:08 crc kubenswrapper[4637]: E1201 14:47:08.771290 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:47:08 crc kubenswrapper[4637]: E1201 14:47:08.771398 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.789297 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.789319 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.789328 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.789340 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.789349 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:08Z","lastTransitionTime":"2025-12-01T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.891947 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.891984 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.891994 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.892011 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.892020 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:08Z","lastTransitionTime":"2025-12-01T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.993638 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.993688 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.993704 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.993724 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:08 crc kubenswrapper[4637]: I1201 14:47:08.993739 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:08Z","lastTransitionTime":"2025-12-01T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.095825 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.095862 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.095889 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.095917 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.095926 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:09Z","lastTransitionTime":"2025-12-01T14:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.198002 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.198037 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.198046 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.198059 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.198068 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:09Z","lastTransitionTime":"2025-12-01T14:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.300518 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.300590 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.300598 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.300627 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.300637 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:09Z","lastTransitionTime":"2025-12-01T14:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.407278 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.407340 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.407357 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.407379 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.407393 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:09Z","lastTransitionTime":"2025-12-01T14:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.510483 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.510532 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.510544 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.510561 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.510574 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:09Z","lastTransitionTime":"2025-12-01T14:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.612574 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.612877 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.613014 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.613214 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.613396 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:09Z","lastTransitionTime":"2025-12-01T14:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.715740 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.716041 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.716149 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.716241 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.716327 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:09Z","lastTransitionTime":"2025-12-01T14:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.771278 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:09 crc kubenswrapper[4637]: E1201 14:47:09.772146 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.792353 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"574717bd-1b8a-4875-a64c-e1d4d2ac7204\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c542c15fb110a9f497f95bd8f0abeeec65e9b41552d5c428012654d7ecb8bf75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\
\\"containerID\\\":\\\"cri-o://e2ea0f19f27900daddcad33f43457b692b87c8948009801adfc7ee636e49e7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1844efe5596b8a4c272d93b0b118516d918bcbafc8f469c49a5c01ef90db42c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b75d6a35f5c0d1532624b1b3dbee1fe360046b2f6ba685411684c93f8b557d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b75d6a35f5c0d1532624b1b3dbee1fe360046b2f6ba685411684c93f8b557d2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:09Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.812333 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:09Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.824494 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.824995 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.825118 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.825227 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.825461 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:09Z","lastTransitionTime":"2025-12-01T14:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.825919 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a163b9bea4f475be435e2bcf52012f4682da33e52b9a6b4b6dd6a71b59045a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:56Z\\\",\\\"message\\\":\\\"2025-12-01T14:46:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_030b68d2-5d99-46f3-9a21-70074a0d7276\\\\n2025-12-01T14:46:11+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_030b68d2-5d99-46f3-9a21-70074a0d7276 to /host/opt/cni/bin/\\\\n2025-12-01T14:46:11Z [verbose] multus-daemon started\\\\n2025-12-01T14:46:11Z [verbose] Readiness Indicator file check\\\\n2025-12-01T14:46:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:09Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.840958 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5740d65aaf14069cace7ecf8b5a63e990faffe3beb61e59e2cf5b9cf0e8caaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4
ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:09Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.855068 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:09Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.865361 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:09Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.875637 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:09Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.887649 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c54f70-de42-4510-8ae3-d5ef74e13ab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7e6ae528b43e9343dd5219dc19ef92a71343f883c073d0b611b107b82033f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94cff2815328d07dee4e35812d50827e47165
f3cc2ba0bd2fb8e2e230d29da54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gdttc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:09Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.898893 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7w2l8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"435e8f74-9c96-4508-b6a6-a1a2280f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7w2l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:09Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:09 crc 
kubenswrapper[4637]: I1201 14:47:09.910057 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:09Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.919145 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee01a4e-140a-47fb-bae0-6dc6088e3a18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226ba1817928934b4987c768dc91c2b010d39f84323f2d84555a9e3418e1563b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6289a7db0399411b7aa516cf9953e3e199b7c83e4858133819dc0b436ffcddfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289a7db0399411b7aa516cf9953e3e199b7c83e4858133819dc0b436ffcddfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:09Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.927551 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.927583 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.927590 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.927602 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.927610 4637 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:09Z","lastTransitionTime":"2025-12-01T14:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.932958 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:09Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.949408 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:09Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.959324 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:09Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.970870 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e6
13190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:09Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:09 crc kubenswrapper[4637]: I1201 14:47:09.996456 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T14:47:09Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.015490 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.031282 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 
14:47:10.031315 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.031325 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.031352 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.031361 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:10Z","lastTransitionTime":"2025-12-01T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.041270 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://affb5c1fe2e8bcddc7d68560234f7171ba395723f229ac9e714eb5cd4048dba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://affb5c1fe2e8bcddc7d68560234f7171ba395723f229ac9e714eb5cd4048dba3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:44Z\\\",\\\"message\\\":\\\"ntroller.go:451] Built service openshift-authentication/oauth-openshift cluster-wide LB 
for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-authentication/oauth-openshift_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication/oauth-openshift\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.222\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1201 14:46:44.574687 6238 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"2a3fb1a3-a476-4e14-bcf5-fb79af60206a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rhl62_openshift-ovn-kubernetes(d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8
984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:10Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.133904 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.133962 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.133971 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.133984 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.133992 4637 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:10Z","lastTransitionTime":"2025-12-01T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.236198 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.236277 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.236287 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.236302 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.236311 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:10Z","lastTransitionTime":"2025-12-01T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.338068 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.338110 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.338122 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.338137 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.338149 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:10Z","lastTransitionTime":"2025-12-01T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.440380 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.440417 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.440428 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.440441 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.440452 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:10Z","lastTransitionTime":"2025-12-01T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.542621 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.542651 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.542659 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.542670 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.542680 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:10Z","lastTransitionTime":"2025-12-01T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.644690 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.644728 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.644740 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.644755 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.644765 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:10Z","lastTransitionTime":"2025-12-01T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.746415 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.746466 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.746494 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.746508 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.746518 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:10Z","lastTransitionTime":"2025-12-01T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.771291 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.771330 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.771337 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:10 crc kubenswrapper[4637]: E1201 14:47:10.772189 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:10 crc kubenswrapper[4637]: E1201 14:47:10.772382 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:47:10 crc kubenswrapper[4637]: E1201 14:47:10.772413 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.849177 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.849211 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.849219 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.849231 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.849240 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:10Z","lastTransitionTime":"2025-12-01T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.954351 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.954818 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.955127 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.955489 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:10 crc kubenswrapper[4637]: I1201 14:47:10.955892 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:10Z","lastTransitionTime":"2025-12-01T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.059127 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.059154 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.059161 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.059174 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.059183 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:11Z","lastTransitionTime":"2025-12-01T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.161474 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.161505 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.161513 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.161526 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.161534 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:11Z","lastTransitionTime":"2025-12-01T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.264103 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.264148 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.264161 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.264177 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.264190 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:11Z","lastTransitionTime":"2025-12-01T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.366804 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.366851 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.366864 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.366884 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.366968 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:11Z","lastTransitionTime":"2025-12-01T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.469426 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.470098 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.470121 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.470427 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.470449 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:11Z","lastTransitionTime":"2025-12-01T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.573021 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.573060 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.573068 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.573080 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.573089 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:11Z","lastTransitionTime":"2025-12-01T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.675582 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.675621 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.675635 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.675652 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.675666 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:11Z","lastTransitionTime":"2025-12-01T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.771155 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:11 crc kubenswrapper[4637]: E1201 14:47:11.771375 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.778049 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.778110 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.778133 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.778160 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.778182 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:11Z","lastTransitionTime":"2025-12-01T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.882016 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.882066 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.882081 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.882102 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.882118 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:11Z","lastTransitionTime":"2025-12-01T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.984740 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.985212 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.985348 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.985485 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:11 crc kubenswrapper[4637]: I1201 14:47:11.985684 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:11Z","lastTransitionTime":"2025-12-01T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.088023 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.088249 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.088388 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.088515 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.088632 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:12Z","lastTransitionTime":"2025-12-01T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.191505 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.191775 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.191868 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.192016 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.192141 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:12Z","lastTransitionTime":"2025-12-01T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.295661 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.296038 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.296116 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.296182 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.296243 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:12Z","lastTransitionTime":"2025-12-01T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.398526 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.398815 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.398995 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.399122 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.399224 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:12Z","lastTransitionTime":"2025-12-01T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.501271 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.502064 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.502082 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.502102 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.502114 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:12Z","lastTransitionTime":"2025-12-01T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.561154 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.561317 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.561360 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:12 crc kubenswrapper[4637]: E1201 14:47:12.561432 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:16.5614026 +0000 UTC m=+147.079111468 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:47:12 crc kubenswrapper[4637]: E1201 14:47:12.561502 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 14:47:12 crc kubenswrapper[4637]: E1201 14:47:12.561526 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 14:47:12 crc kubenswrapper[4637]: E1201 14:47:12.561545 4637 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:47:12 crc kubenswrapper[4637]: E1201 14:47:12.561584 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 14:47:12 crc kubenswrapper[4637]: E1201 14:47:12.561622 4637 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 14:47:12 crc kubenswrapper[4637]: E1201 14:47:12.561599 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 14:48:16.561583004 +0000 UTC m=+147.079291862 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.561514 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:12 crc kubenswrapper[4637]: E1201 14:47:12.561638 4637 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 14:47:12 crc kubenswrapper[4637]: E1201 14:47:12.561695 4637 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:47:12 crc kubenswrapper[4637]: E1201 14:47:12.561696 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-01 14:48:16.561674497 +0000 UTC m=+147.079383365 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.561848 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:12 crc kubenswrapper[4637]: E1201 14:47:12.561918 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 14:48:16.561899243 +0000 UTC m=+147.079608111 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 14:47:12 crc kubenswrapper[4637]: E1201 14:47:12.562031 4637 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 14:47:12 crc kubenswrapper[4637]: E1201 14:47:12.562126 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 14:48:16.562102338 +0000 UTC m=+147.079811216 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.605997 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.606346 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.606533 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.606726 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.607001 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:12Z","lastTransitionTime":"2025-12-01T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.710511 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.710886 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.711132 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.711302 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.711446 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:12Z","lastTransitionTime":"2025-12-01T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.771386 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.771396 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.771485 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:12 crc kubenswrapper[4637]: E1201 14:47:12.772060 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:47:12 crc kubenswrapper[4637]: E1201 14:47:12.772202 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:47:12 crc kubenswrapper[4637]: E1201 14:47:12.772584 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.814187 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.814229 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.814237 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.814251 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.814261 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:12Z","lastTransitionTime":"2025-12-01T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.916301 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.916328 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.916335 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.916348 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:12 crc kubenswrapper[4637]: I1201 14:47:12.916356 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:12Z","lastTransitionTime":"2025-12-01T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.019654 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.020012 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.020302 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.020497 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.020775 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:13Z","lastTransitionTime":"2025-12-01T14:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.128771 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.128832 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.128842 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.128857 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.128866 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:13Z","lastTransitionTime":"2025-12-01T14:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.230869 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.230921 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.230962 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.230976 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.230986 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:13Z","lastTransitionTime":"2025-12-01T14:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.333761 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.333806 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.333815 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.333830 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.333840 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:13Z","lastTransitionTime":"2025-12-01T14:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.438787 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.439106 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.439115 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.439131 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.439141 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:13Z","lastTransitionTime":"2025-12-01T14:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.541346 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.541387 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.541398 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.541646 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.541669 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:13Z","lastTransitionTime":"2025-12-01T14:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.644491 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.644549 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.644566 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.644588 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.644601 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:13Z","lastTransitionTime":"2025-12-01T14:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.747546 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.747593 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.747607 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.747623 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.747634 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:13Z","lastTransitionTime":"2025-12-01T14:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.771151 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:13 crc kubenswrapper[4637]: E1201 14:47:13.771288 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.772354 4637 scope.go:117] "RemoveContainer" containerID="affb5c1fe2e8bcddc7d68560234f7171ba395723f229ac9e714eb5cd4048dba3" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.849688 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.849718 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.849727 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.849739 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.849748 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:13Z","lastTransitionTime":"2025-12-01T14:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.953042 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.953088 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.953102 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.953123 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:13 crc kubenswrapper[4637]: I1201 14:47:13.953137 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:13Z","lastTransitionTime":"2025-12-01T14:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.055764 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.055817 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.055829 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.055846 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.055857 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:14Z","lastTransitionTime":"2025-12-01T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.158165 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.158204 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.158215 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.158229 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.158239 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:14Z","lastTransitionTime":"2025-12-01T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.194427 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhl62_d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831/ovnkube-controller/2.log" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.196719 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" event={"ID":"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831","Type":"ContainerStarted","Data":"abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292"} Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.197390 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.212382 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.227113 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.237310 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.247673 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c54f70-de42-4510-8ae3-d5ef74e13ab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7e6ae528b43e9343dd5219dc19ef92a71343f883c073d0b611b107b82033f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94cff2815328d07dee4e35812d50827e47165f3cc2ba0bd2fb8e2e230d29da54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gdttc\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.257909 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7w2l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"435e8f74-9c96-4508-b6a6-a1a2280f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7w2l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:14 crc 
kubenswrapper[4637]: I1201 14:47:14.260584 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.260631 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.260640 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.260653 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.260662 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:14Z","lastTransitionTime":"2025-12-01T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.270675 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820
db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.284897 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee01a4e-140a-47fb-bae0-6dc6088e3a18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226ba1817928934b4987c768dc91c2b010d39f84323f2d84555a9e3418e1563b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6289a7db0399411b7aa516cf9953e3e199b7c83e4858133819dc0b436ffcddfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289a7db0399411b7aa516cf9953e3e199b7c83e4858133819dc0b436ffcddfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.297205 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.316093 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://affb5c1fe2e8bcddc7d68560234f7171ba395723f229ac9e714eb5cd4048dba3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:44Z\\\",\\\"message\\\":\\\"ntroller.go:451] Built service openshift-authentication/oauth-openshift cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-authentication/oauth-openshift_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication/oauth-openshift\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.222\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1201 14:46:44.574687 6238 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"2a3fb1a3-a476-4e14-bcf5-fb79af60206a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]string{}\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/
\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.329200 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e6
13190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.344092 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T14:47:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.355752 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.362178 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 
14:47:14.362230 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.362239 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.362253 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.362263 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:14Z","lastTransitionTime":"2025-12-01T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.371217 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a163b9bea4f475be435e2bcf52012f4682da33e52b9a6b4b6dd6a71b59045a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:56Z\\\",\\\"message\\\":\\\"2025-12-01T14:46:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_030b68d2-5d99-46f3-9a21-70074a0d7276\\\\n2025-12-01T14:46:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_030b68d2-5d99-46f3-9a21-70074a0d7276 to /host/opt/cni/bin/\\\\n2025-12-01T14:46:11Z [verbose] multus-daemon started\\\\n2025-12-01T14:46:11Z [verbose] 
Readiness Indicator file check\\\\n2025-12-01T14:46:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.385066 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c57
40d65aaf14069cace7ecf8b5a63e990faffe3beb61e59e2cf5b9cf0e8caaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.394909 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"574717bd-1b8a-4875-a64c-e1d4d2ac7204\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c542c15fb110a9f497f95bd8f0abeeec65e9b41552d5c428012654d7ecb8bf75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ea0f19f27900daddcad33f43457b692b87c8948009801adfc7ee636e49e7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1844efe5596b8a4c272d93b0b118516d918bcbafc8f469c49a5c01ef90db42c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b75d6a35f5c0d1532624b1b3dbee1fe360046b2f6ba685411684c93f8b557d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b75d6a35f5c0d1532624b1b3dbee1fe360046b2f6ba685411684c93f8b557d2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.399224 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.399257 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.399265 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.399280 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.399288 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:14Z","lastTransitionTime":"2025-12-01T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.409979 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:14 crc kubenswrapper[4637]: E1201 14:47:14.414173 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.417151 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.417174 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.417181 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.417193 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.417201 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:14Z","lastTransitionTime":"2025-12-01T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.422793 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:14 crc kubenswrapper[4637]: E1201 14:47:14.430424 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"message\\\":\\\"kubelet has 
no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c6
9fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\
\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737
e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909
bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.432657 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.433515 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.433552 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.433564 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.433581 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.433592 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:14Z","lastTransitionTime":"2025-12-01T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:14 crc kubenswrapper[4637]: E1201 14:47:14.446008 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.448880 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.448915 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.448925 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.448961 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.448974 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:14Z","lastTransitionTime":"2025-12-01T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:14 crc kubenswrapper[4637]: E1201 14:47:14.462139 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.465447 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.465482 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.465493 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.465510 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.465522 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:14Z","lastTransitionTime":"2025-12-01T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:14 crc kubenswrapper[4637]: E1201 14:47:14.478424 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:14Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:14 crc kubenswrapper[4637]: E1201 14:47:14.478582 4637 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.480151 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.480180 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.480192 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.480207 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.480218 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:14Z","lastTransitionTime":"2025-12-01T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.582155 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.582193 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.582204 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.582219 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.582230 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:14Z","lastTransitionTime":"2025-12-01T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.684065 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.684141 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.684157 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.684171 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.684181 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:14Z","lastTransitionTime":"2025-12-01T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.771388 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.771462 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.771487 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:14 crc kubenswrapper[4637]: E1201 14:47:14.771922 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:47:14 crc kubenswrapper[4637]: E1201 14:47:14.772064 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:14 crc kubenswrapper[4637]: E1201 14:47:14.772131 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.785865 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.785974 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.786004 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.786031 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.786053 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:14Z","lastTransitionTime":"2025-12-01T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.888409 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.888635 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.888643 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.888656 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.888665 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:14Z","lastTransitionTime":"2025-12-01T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.990546 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.990587 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.990599 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.990615 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:14 crc kubenswrapper[4637]: I1201 14:47:14.990650 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:14Z","lastTransitionTime":"2025-12-01T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.093333 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.093371 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.093381 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.093399 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.093408 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:15Z","lastTransitionTime":"2025-12-01T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.196148 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.196172 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.196180 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.196193 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.196201 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:15Z","lastTransitionTime":"2025-12-01T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.202147 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhl62_d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831/ovnkube-controller/3.log" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.203040 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhl62_d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831/ovnkube-controller/2.log" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.206458 4637 generic.go:334] "Generic (PLEG): container finished" podID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerID="abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292" exitCode=1 Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.206496 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" event={"ID":"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831","Type":"ContainerDied","Data":"abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292"} Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.206527 4637 scope.go:117] "RemoveContainer" containerID="affb5c1fe2e8bcddc7d68560234f7171ba395723f229ac9e714eb5cd4048dba3" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.207456 4637 scope.go:117] "RemoveContainer" containerID="abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292" Dec 01 14:47:15 crc kubenswrapper[4637]: E1201 14:47:15.207641 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rhl62_openshift-ovn-kubernetes(d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.220514 4637 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.250418 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://affb5c1fe2e8bcddc7d68560234f7171ba395723f229ac9e714eb5cd4048dba3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:44Z\\\",\\\"message\\\":\\\"ntroller.go:451] Built service openshift-authentication/oauth-openshift cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-authentication/oauth-openshift_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication/oauth-openshift\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.222\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1201 14:46:44.574687 6238 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"2a3fb1a3-a476-4e14-bcf5-fb79af60206a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"message\\\":\\\"ger_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] 
Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1201 14:47:14.558212 6622 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 14:47:14.558364 6622 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:14Z is after 2025-08-24T17:21:41Z]\\\\nI1201 
14:47:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a1
5405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.267640 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e6
13190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.286786 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T14:47:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.298976 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.299139 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.299185 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.299195 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.299210 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.299220 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:15Z","lastTransitionTime":"2025-12-01T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.312877 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a163b9bea4f475be435e2bcf52012f4682da33e52b9a6b4b6dd6a71b59045a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:56Z\\\",\\\"message\\\":\\\"2025-12-01T14:46:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_030b68d2-5d99-46f3-9a21-70074a0d7276\\\\n2025-12-01T14:46:11+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_030b68d2-5d99-46f3-9a21-70074a0d7276 to /host/opt/cni/bin/\\\\n2025-12-01T14:46:11Z [verbose] multus-daemon started\\\\n2025-12-01T14:46:11Z [verbose] Readiness Indicator file check\\\\n2025-12-01T14:46:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.327581 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5740d65aaf14069cace7ecf8b5a63e990faffe3beb61e59e2cf5b9cf0e8caaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4
ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.338060 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"574717bd-1b8a-4875-a64c-e1d4d2ac7204\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c542c15fb110a9f497f95bd8f0abeeec65e9b41552d5c428012654d7ecb8bf75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ea0f19f27900daddcad33f43457b692b87c8948009801adfc7ee636e49e7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1844efe5596b8a4c272d93b0b118516d918bcbafc8f469c49a5c01ef90db42c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b75d6a35f5c0d1532624b1b3d
bee1fe360046b2f6ba685411684c93f8b557d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b75d6a35f5c0d1532624b1b3dbee1fe360046b2f6ba685411684c93f8b557d2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.348787 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.362446 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.373712 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.384176 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.393772 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.401354 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.401394 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.401405 4637 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.401420 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.401433 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:15Z","lastTransitionTime":"2025-12-01T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.402560 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c
30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.412513 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c54f70-de42-4510-8ae3-d5ef74e13ab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7e6ae528b43e9343dd5219dc19ef92a71343f883c073d0b611b107b82033f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94cff2815328d07dee4e35812d50827e47165
f3cc2ba0bd2fb8e2e230d29da54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gdttc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.422556 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7w2l8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"435e8f74-9c96-4508-b6a6-a1a2280f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7w2l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:15 crc 
kubenswrapper[4637]: I1201 14:47:15.434804 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.444426 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee01a4e-140a-47fb-bae0-6dc6088e3a18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226ba1817928934b4987c768dc91c2b010d39f84323f2d84555a9e3418e1563b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6289a7db0399411b7aa516cf9953e3e199b7c83e4858133819dc0b436ffcddfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289a7db0399411b7aa516cf9953e3e199b7c83e4858133819dc0b436ffcddfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:15Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.503363 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.503599 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.503729 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.503859 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.503960 4637 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:15Z","lastTransitionTime":"2025-12-01T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.606688 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.606749 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.606757 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.606770 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.606781 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:15Z","lastTransitionTime":"2025-12-01T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.710562 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.711009 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.711111 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.711254 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.711369 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:15Z","lastTransitionTime":"2025-12-01T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.770546 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:15 crc kubenswrapper[4637]: E1201 14:47:15.770761 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.814486 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.814524 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.814534 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.814568 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.814580 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:15Z","lastTransitionTime":"2025-12-01T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.918399 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.918457 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.918473 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.918495 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:15 crc kubenswrapper[4637]: I1201 14:47:15.918512 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:15Z","lastTransitionTime":"2025-12-01T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.021480 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.021827 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.022057 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.022206 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.022355 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:16Z","lastTransitionTime":"2025-12-01T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.124778 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.124862 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.124886 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.124914 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.124994 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:16Z","lastTransitionTime":"2025-12-01T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.211308 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhl62_d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831/ovnkube-controller/3.log" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.214375 4637 scope.go:117] "RemoveContainer" containerID="abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292" Dec 01 14:47:16 crc kubenswrapper[4637]: E1201 14:47:16.214513 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rhl62_openshift-ovn-kubernetes(d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.226463 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.226528 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.226537 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.226550 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.226559 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:16Z","lastTransitionTime":"2025-12-01T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.231393 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-ope
rator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 
14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.246740 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T14:47:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.258818 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.283384 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"message\\\":\\\"ger_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1201 14:47:14.558212 6622 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 14:47:14.558364 6622 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:14Z is after 2025-08-24T17:21:41Z]\\\\nI1201 14:47:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:47:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rhl62_openshift-ovn-kubernetes(d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8
984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.294069 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"574717bd-1b8a-4875-a64c-e1d4d2ac7204\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c542c15fb110a9f497f95bd8f0abeeec65e9b41552d5c428012654d7ecb8bf75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ea0f19f27900daddcad33f43457b692b87c8948009801adfc7ee636e49e7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1844efe5596b8a4c272d93b0b118516d918bcbafc8f469c49a5c01ef90db42c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b75d6a35f5c0d1532624b1b3dbee1fe360046b2f6ba685411684c93f8b557d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b75d6a35f5c0d1532624b1b3dbee1fe360046b2f6ba685411684c93f8b557d2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.306808 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.320621 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a163b9bea4f475be435e2bcf52012f4682da33e52b9a6b4b6dd6a71b59045a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:56Z\\\",\\\"message\\\":\\\"2025-12-01T14:46:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_030b68d2-5d99-46f3-9a21-70074a0d7276\\\\n2025-12-01T14:46:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_030b68d2-5d99-46f3-9a21-70074a0d7276 to /host/opt/cni/bin/\\\\n2025-12-01T14:46:11Z [verbose] multus-daemon started\\\\n2025-12-01T14:46:11Z [verbose] Readiness Indicator file check\\\\n2025-12-01T14:46:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.328352 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.328389 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.328399 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.328413 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.328421 4637 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:16Z","lastTransitionTime":"2025-12-01T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.338611 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5740d65aaf14069cace7ecf8b5a63e990faffe3beb61e59e2cf5b9cf0e8caaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.355298 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.367709 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.379817 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.392402 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c54f70-de42-4510-8ae3-d5ef74e13ab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7e6ae528b43e9343dd5219dc19ef92a71343f883c073d0b611b107b82033f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94cff2815328d07dee4e35812d50827e47165
f3cc2ba0bd2fb8e2e230d29da54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gdttc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.407785 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7w2l8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"435e8f74-9c96-4508-b6a6-a1a2280f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7w2l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:16 crc 
kubenswrapper[4637]: I1201 14:47:16.423150 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.430506 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.430548 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.430562 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.430583 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.430597 4637 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:16Z","lastTransitionTime":"2025-12-01T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.433510 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee01a4e-140a-47fb-bae0-6dc6088e3a18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226ba1817928934b4987c768dc91c2b010d39f84323f2d84555a9e3418e1563b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6289a7db0399411b7aa516cf9953e3e199b7c83e4858133819dc0b436ffcddfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289a7db0399411b7aa516cf9953e3e199b7c83e4858133819dc0b436ffcddfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.446370 4637 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.461388 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.472491 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:16Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.533478 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.533520 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.533532 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.533552 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.533563 4637 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:16Z","lastTransitionTime":"2025-12-01T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.636764 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.636817 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.636831 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.636848 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.636861 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:16Z","lastTransitionTime":"2025-12-01T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.740216 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.740263 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.740275 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.740292 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.740305 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:16Z","lastTransitionTime":"2025-12-01T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.770987 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.771090 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:16 crc kubenswrapper[4637]: E1201 14:47:16.771143 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:16 crc kubenswrapper[4637]: E1201 14:47:16.771277 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.771004 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:16 crc kubenswrapper[4637]: E1201 14:47:16.771440 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.842444 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.842515 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.842538 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.842565 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.842587 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:16Z","lastTransitionTime":"2025-12-01T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.945400 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.945469 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.945488 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.945512 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:16 crc kubenswrapper[4637]: I1201 14:47:16.945531 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:16Z","lastTransitionTime":"2025-12-01T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.048507 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.048583 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.048619 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.048649 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.048673 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:17Z","lastTransitionTime":"2025-12-01T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.151904 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.152008 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.152030 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.152060 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.152083 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:17Z","lastTransitionTime":"2025-12-01T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.254587 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.254671 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.254687 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.254706 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.254720 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:17Z","lastTransitionTime":"2025-12-01T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.357686 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.357760 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.357783 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.357805 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.357824 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:17Z","lastTransitionTime":"2025-12-01T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.460735 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.460781 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.460792 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.460807 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.460818 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:17Z","lastTransitionTime":"2025-12-01T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.564097 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.564173 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.564194 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.564221 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.564239 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:17Z","lastTransitionTime":"2025-12-01T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.667312 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.667386 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.667403 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.667427 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.667444 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:17Z","lastTransitionTime":"2025-12-01T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.770454 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.770527 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.770547 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.770574 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.770580 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.770591 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:17Z","lastTransitionTime":"2025-12-01T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:17 crc kubenswrapper[4637]: E1201 14:47:17.770747 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.873270 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.873341 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.873358 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.873406 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.873424 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:17Z","lastTransitionTime":"2025-12-01T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.976260 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.976336 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.976355 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.976383 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:17 crc kubenswrapper[4637]: I1201 14:47:17.976403 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:17Z","lastTransitionTime":"2025-12-01T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.080151 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.080223 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.080241 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.080265 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.080284 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:18Z","lastTransitionTime":"2025-12-01T14:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.183679 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.183735 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.183746 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.183764 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.183775 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:18Z","lastTransitionTime":"2025-12-01T14:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.287477 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.287556 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.287574 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.287595 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.287609 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:18Z","lastTransitionTime":"2025-12-01T14:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.390704 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.390774 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.390799 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.390826 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.390850 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:18Z","lastTransitionTime":"2025-12-01T14:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.493630 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.493694 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.493721 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.493751 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.493774 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:18Z","lastTransitionTime":"2025-12-01T14:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.596222 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.596252 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.596261 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.596274 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.596284 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:18Z","lastTransitionTime":"2025-12-01T14:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.698178 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.698215 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.698226 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.698241 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.698251 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:18Z","lastTransitionTime":"2025-12-01T14:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.770668 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.770760 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.770801 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:18 crc kubenswrapper[4637]: E1201 14:47:18.770924 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:18 crc kubenswrapper[4637]: E1201 14:47:18.771095 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:47:18 crc kubenswrapper[4637]: E1201 14:47:18.771252 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.800689 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.800730 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.800742 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.800760 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.800772 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:18Z","lastTransitionTime":"2025-12-01T14:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.903547 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.903594 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.903605 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.903622 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:18 crc kubenswrapper[4637]: I1201 14:47:18.903634 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:18Z","lastTransitionTime":"2025-12-01T14:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.006766 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.006842 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.006864 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.006888 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.006906 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:19Z","lastTransitionTime":"2025-12-01T14:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.109457 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.109491 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.109500 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.109529 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.109539 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:19Z","lastTransitionTime":"2025-12-01T14:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.211731 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.211779 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.211788 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.211802 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.211811 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:19Z","lastTransitionTime":"2025-12-01T14:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.314466 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.314505 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.314514 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.314528 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.314537 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:19Z","lastTransitionTime":"2025-12-01T14:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.416957 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.416985 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.416993 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.417006 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.417014 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:19Z","lastTransitionTime":"2025-12-01T14:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.519376 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.519444 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.519461 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.519484 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.519502 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:19Z","lastTransitionTime":"2025-12-01T14:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.622691 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.622787 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.622808 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.622951 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.622982 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:19Z","lastTransitionTime":"2025-12-01T14:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.727010 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.727117 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.727163 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.727191 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.727208 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:19Z","lastTransitionTime":"2025-12-01T14:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.771240 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:19 crc kubenswrapper[4637]: E1201 14:47:19.771411 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.789525 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.811427 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.826563 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.829997 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.830032 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.830137 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.830156 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.830170 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:19Z","lastTransitionTime":"2025-12-01T14:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.839770 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c54f70-de42-4510-8ae3-d5ef74e13ab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7e6ae528b43e9343dd5219dc19ef92a71343f883c073d0b611b107b82033f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94cff2815328d07dee4e35812d50827e47165f3cc2ba0bd2fb8e2e230d29da54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gdttc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.852523 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7w2l8" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"435e8f74-9c96-4508-b6a6-a1a2280f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7w2l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:19 crc 
kubenswrapper[4637]: I1201 14:47:19.866426 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.878901 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee01a4e-140a-47fb-bae0-6dc6088e3a18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226ba1817928934b4987c768dc91c2b010d39f84323f2d84555a9e3418e1563b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6289a7db0399411b7aa516cf9953e3e199b7c83e4858133819dc0b436ffcddfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289a7db0399411b7aa516cf9953e3e199b7c83e4858133819dc0b436ffcddfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.890394 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.911920 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"message\\\":\\\"ger_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1201 14:47:14.558212 6622 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 14:47:14.558364 6622 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:14Z is after 2025-08-24T17:21:41Z]\\\\nI1201 14:47:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:47:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rhl62_openshift-ovn-kubernetes(d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8
984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.927167 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e6
13190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.933295 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.933334 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.933348 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.933367 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.933384 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:19Z","lastTransitionTime":"2025-12-01T14:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.938704 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.952202 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T14:47:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.963112 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a163b9bea4f475be435e2bcf52012f4682da33e52b9a6b4b6dd6a71b59045a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:56Z\\\",\\\"message\\\":\\\"2025-12-01T14:46:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_030b68d2-5d99-46f3-9a21-70074a0d7276\\\\n2025-12-01T14:46:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_030b68d2-5d99-46f3-9a21-70074a0d7276 to /host/opt/cni/bin/\\\\n2025-12-01T14:46:11Z [verbose] multus-daemon started\\\\n2025-12-01T14:46:11Z [verbose] Readiness Indicator file check\\\\n2025-12-01T14:46:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.977121 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5740d65aaf14069cace7ecf8b5a63e990faffe3beb61e59e2cf5b9cf0e8caaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4
ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:19 crc kubenswrapper[4637]: I1201 14:47:19.987863 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"574717bd-1b8a-4875-a64c-e1d4d2ac7204\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c542c15fb110a9f497f95bd8f0abeeec65e9b41552d5c428012654d7ecb8bf75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ea0f19f27900daddcad33f43457b692b87c8948009801adfc7ee636e49e7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1844efe5596b8a4c272d93b0b118516d918bcbafc8f469c49a5c01ef90db42c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b75d6a35f5c0d1532624b1b3d
bee1fe360046b2f6ba685411684c93f8b557d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b75d6a35f5c0d1532624b1b3dbee1fe360046b2f6ba685411684c93f8b557d2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:19Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.002038 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:20Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.014103 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:20Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.023671 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:20Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.035661 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.035874 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.036005 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.036126 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.036276 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:20Z","lastTransitionTime":"2025-12-01T14:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.139009 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.139300 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.139413 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.139519 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.139613 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:20Z","lastTransitionTime":"2025-12-01T14:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.242915 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.243045 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.243070 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.243102 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.243125 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:20Z","lastTransitionTime":"2025-12-01T14:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.345974 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.346342 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.346462 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.346667 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.346783 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:20Z","lastTransitionTime":"2025-12-01T14:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.449967 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.450012 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.450028 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.450047 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.450063 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:20Z","lastTransitionTime":"2025-12-01T14:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.553178 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.553230 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.553244 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.553267 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.553285 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:20Z","lastTransitionTime":"2025-12-01T14:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.656478 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.656803 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.656981 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.657107 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.657226 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:20Z","lastTransitionTime":"2025-12-01T14:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.760038 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.760077 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.760085 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.760099 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.760108 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:20Z","lastTransitionTime":"2025-12-01T14:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.771249 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.771295 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.771370 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:20 crc kubenswrapper[4637]: E1201 14:47:20.771374 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:47:20 crc kubenswrapper[4637]: E1201 14:47:20.771445 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:20 crc kubenswrapper[4637]: E1201 14:47:20.771505 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.862524 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.862847 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.863008 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.863185 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.863303 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:20Z","lastTransitionTime":"2025-12-01T14:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.965402 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.965437 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.965448 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.965463 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:20 crc kubenswrapper[4637]: I1201 14:47:20.965474 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:20Z","lastTransitionTime":"2025-12-01T14:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.067998 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.068051 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.068067 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.068088 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.068103 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:21Z","lastTransitionTime":"2025-12-01T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.170623 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.170867 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.170958 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.171063 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.171121 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:21Z","lastTransitionTime":"2025-12-01T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.273763 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.274174 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.274286 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.274395 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.274595 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:21Z","lastTransitionTime":"2025-12-01T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.377820 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.377880 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.377905 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.377974 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.378001 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:21Z","lastTransitionTime":"2025-12-01T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.479994 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.480477 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.480753 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.480897 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.481070 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:21Z","lastTransitionTime":"2025-12-01T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.583770 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.584135 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.584264 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.584393 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.584491 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:21Z","lastTransitionTime":"2025-12-01T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.689019 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.689077 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.689093 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.689115 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.689135 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:21Z","lastTransitionTime":"2025-12-01T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.771326 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:21 crc kubenswrapper[4637]: E1201 14:47:21.771494 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.791038 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.791069 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.791077 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.791090 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.791098 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:21Z","lastTransitionTime":"2025-12-01T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.894024 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.894095 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.894117 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.894143 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.894163 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:21Z","lastTransitionTime":"2025-12-01T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.996338 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.996400 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.996423 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.996454 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:21 crc kubenswrapper[4637]: I1201 14:47:21.996474 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:21Z","lastTransitionTime":"2025-12-01T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.098807 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.098854 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.098866 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.098884 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.098896 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:22Z","lastTransitionTime":"2025-12-01T14:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.201869 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.201964 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.201993 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.202025 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.202048 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:22Z","lastTransitionTime":"2025-12-01T14:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.304720 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.304783 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.304804 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.304825 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.304845 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:22Z","lastTransitionTime":"2025-12-01T14:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.407335 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.407396 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.407408 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.407421 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.407431 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:22Z","lastTransitionTime":"2025-12-01T14:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.509288 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.509335 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.509346 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.509362 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.509374 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:22Z","lastTransitionTime":"2025-12-01T14:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.612073 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.612147 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.612168 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.612192 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.612209 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:22Z","lastTransitionTime":"2025-12-01T14:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.715011 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.715049 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.715064 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.715087 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.715101 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:22Z","lastTransitionTime":"2025-12-01T14:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.770745 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:22 crc kubenswrapper[4637]: E1201 14:47:22.770982 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.770779 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:22 crc kubenswrapper[4637]: E1201 14:47:22.771136 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.770745 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:22 crc kubenswrapper[4637]: E1201 14:47:22.771252 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.817416 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.817466 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.817486 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.817524 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.817542 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:22Z","lastTransitionTime":"2025-12-01T14:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.920194 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.920243 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.920256 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.920276 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:22 crc kubenswrapper[4637]: I1201 14:47:22.920290 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:22Z","lastTransitionTime":"2025-12-01T14:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.023322 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.023367 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.023383 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.023401 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.023415 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:23Z","lastTransitionTime":"2025-12-01T14:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.125968 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.126024 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.126043 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.126068 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.126086 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:23Z","lastTransitionTime":"2025-12-01T14:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.229174 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.229219 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.229233 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.229251 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.229264 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:23Z","lastTransitionTime":"2025-12-01T14:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.331983 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.332042 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.332059 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.332080 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.332095 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:23Z","lastTransitionTime":"2025-12-01T14:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.434740 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.434779 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.434790 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.434808 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.434819 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:23Z","lastTransitionTime":"2025-12-01T14:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.537207 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.537241 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.537249 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.537264 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.537273 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:23Z","lastTransitionTime":"2025-12-01T14:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.639679 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.639727 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.639739 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.639757 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.639768 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:23Z","lastTransitionTime":"2025-12-01T14:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.742744 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.742798 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.742810 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.742827 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.742840 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:23Z","lastTransitionTime":"2025-12-01T14:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.772149 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:23 crc kubenswrapper[4637]: E1201 14:47:23.772283 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.844714 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.844758 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.844766 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.844780 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.844789 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:23Z","lastTransitionTime":"2025-12-01T14:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.947450 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.947488 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.947499 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.947517 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:23 crc kubenswrapper[4637]: I1201 14:47:23.947528 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:23Z","lastTransitionTime":"2025-12-01T14:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.050496 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.050537 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.050549 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.050567 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.050579 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:24Z","lastTransitionTime":"2025-12-01T14:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.153349 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.153449 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.153492 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.153526 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.153547 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:24Z","lastTransitionTime":"2025-12-01T14:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.256420 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.256501 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.256520 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.256548 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.256570 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:24Z","lastTransitionTime":"2025-12-01T14:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.359194 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.359237 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.359248 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.359265 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.359277 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:24Z","lastTransitionTime":"2025-12-01T14:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.462015 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.462064 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.462077 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.462097 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.462110 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:24Z","lastTransitionTime":"2025-12-01T14:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.564831 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.564875 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.564889 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.564908 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.564919 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:24Z","lastTransitionTime":"2025-12-01T14:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.667505 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.667553 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.667568 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.667589 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.667603 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:24Z","lastTransitionTime":"2025-12-01T14:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.770359 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.770401 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.770407 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.770416 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.770434 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.770450 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:24Z","lastTransitionTime":"2025-12-01T14:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:24 crc kubenswrapper[4637]: E1201 14:47:24.770494 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.770541 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:24 crc kubenswrapper[4637]: E1201 14:47:24.770589 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.770629 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:24 crc kubenswrapper[4637]: E1201 14:47:24.770683 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.819316 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.819361 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.819373 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.819388 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.819399 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:24Z","lastTransitionTime":"2025-12-01T14:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:24 crc kubenswrapper[4637]: E1201 14:47:24.835245 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.838911 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.839011 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.839041 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.839064 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.839079 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:24Z","lastTransitionTime":"2025-12-01T14:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:24 crc kubenswrapper[4637]: E1201 14:47:24.851843 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.855871 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.855977 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.855995 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.856016 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.856032 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:24Z","lastTransitionTime":"2025-12-01T14:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:24Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:24 crc kubenswrapper[4637]: E1201 14:47:24.896607 4637 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.897979 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.898014 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.898025 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.898042 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:24 crc kubenswrapper[4637]: I1201 14:47:24.898053 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:24Z","lastTransitionTime":"2025-12-01T14:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.000431 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.000466 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.000476 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.000490 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.000501 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:25Z","lastTransitionTime":"2025-12-01T14:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.103004 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.103042 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.103050 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.103066 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.103075 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:25Z","lastTransitionTime":"2025-12-01T14:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.205783 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.205846 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.205865 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.205890 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.205911 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:25Z","lastTransitionTime":"2025-12-01T14:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.309126 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.309200 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.309220 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.309247 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.309268 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:25Z","lastTransitionTime":"2025-12-01T14:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.411674 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.411726 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.411742 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.411762 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.411777 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:25Z","lastTransitionTime":"2025-12-01T14:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.513993 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.514254 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.514410 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.514571 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.514691 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:25Z","lastTransitionTime":"2025-12-01T14:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.617758 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.618077 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.618174 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.618303 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.618395 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:25Z","lastTransitionTime":"2025-12-01T14:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.721785 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.722399 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.722581 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.722734 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.722876 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:25Z","lastTransitionTime":"2025-12-01T14:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.770509 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:25 crc kubenswrapper[4637]: E1201 14:47:25.771108 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.826154 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.826225 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.826243 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.826312 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.826334 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:25Z","lastTransitionTime":"2025-12-01T14:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.929456 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.929535 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.929562 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.929588 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:25 crc kubenswrapper[4637]: I1201 14:47:25.929607 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:25Z","lastTransitionTime":"2025-12-01T14:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.032576 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.032656 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.032683 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.032706 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.032731 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:26Z","lastTransitionTime":"2025-12-01T14:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.135562 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.135607 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.135620 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.135633 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.135642 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:26Z","lastTransitionTime":"2025-12-01T14:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.238031 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.238070 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.238084 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.238101 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.238113 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:26Z","lastTransitionTime":"2025-12-01T14:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.340280 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.340310 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.340318 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.340331 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.340639 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:26Z","lastTransitionTime":"2025-12-01T14:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.442431 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.442474 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.442485 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.442502 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.442513 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:26Z","lastTransitionTime":"2025-12-01T14:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.544820 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.544846 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.544854 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.544866 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.544874 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:26Z","lastTransitionTime":"2025-12-01T14:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.648470 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.648501 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.648510 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.648522 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.648531 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:26Z","lastTransitionTime":"2025-12-01T14:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.751000 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.751034 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.751042 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.751063 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.751072 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:26Z","lastTransitionTime":"2025-12-01T14:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.770640 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.770699 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.770699 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:26 crc kubenswrapper[4637]: E1201 14:47:26.770794 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:26 crc kubenswrapper[4637]: E1201 14:47:26.770899 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:47:26 crc kubenswrapper[4637]: E1201 14:47:26.771220 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.852908 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.852987 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.853004 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.853024 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.853038 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:26Z","lastTransitionTime":"2025-12-01T14:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.955222 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.955253 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.955265 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.955280 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:26 crc kubenswrapper[4637]: I1201 14:47:26.955292 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:26Z","lastTransitionTime":"2025-12-01T14:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.057527 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.057567 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.057581 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.057600 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.057616 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:27Z","lastTransitionTime":"2025-12-01T14:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.160586 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.160608 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.160616 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.160628 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.160639 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:27Z","lastTransitionTime":"2025-12-01T14:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.263327 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.263365 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.263374 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.263389 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.263398 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:27Z","lastTransitionTime":"2025-12-01T14:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.366804 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.366848 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.366861 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.366879 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.366892 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:27Z","lastTransitionTime":"2025-12-01T14:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.470496 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.470569 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.470592 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.470621 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.470642 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:27Z","lastTransitionTime":"2025-12-01T14:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.573587 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.573646 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.573672 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.573700 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.573722 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:27Z","lastTransitionTime":"2025-12-01T14:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.676750 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.677039 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.677149 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.677271 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.677438 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:27Z","lastTransitionTime":"2025-12-01T14:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.770530 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:27 crc kubenswrapper[4637]: E1201 14:47:27.770793 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.779793 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.779839 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.779857 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.779877 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.779894 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:27Z","lastTransitionTime":"2025-12-01T14:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.883127 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.883191 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.883208 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.883232 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.883251 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:27Z","lastTransitionTime":"2025-12-01T14:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.926025 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/435e8f74-9c96-4508-b6a6-a1a2280f8176-metrics-certs\") pod \"network-metrics-daemon-7w2l8\" (UID: \"435e8f74-9c96-4508-b6a6-a1a2280f8176\") " pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:27 crc kubenswrapper[4637]: E1201 14:47:27.926229 4637 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 14:47:27 crc kubenswrapper[4637]: E1201 14:47:27.926331 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/435e8f74-9c96-4508-b6a6-a1a2280f8176-metrics-certs podName:435e8f74-9c96-4508-b6a6-a1a2280f8176 nodeName:}" failed. No retries permitted until 2025-12-01 14:48:31.926304142 +0000 UTC m=+162.444013010 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/435e8f74-9c96-4508-b6a6-a1a2280f8176-metrics-certs") pod "network-metrics-daemon-7w2l8" (UID: "435e8f74-9c96-4508-b6a6-a1a2280f8176") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.986772 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.986844 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.986865 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.986896 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:27 crc kubenswrapper[4637]: I1201 14:47:27.986919 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:27Z","lastTransitionTime":"2025-12-01T14:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.089999 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.090076 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.090110 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.090132 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.090143 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:28Z","lastTransitionTime":"2025-12-01T14:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.192765 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.192849 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.192877 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.192907 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.192925 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:28Z","lastTransitionTime":"2025-12-01T14:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.296302 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.296358 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.296377 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.296401 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.296417 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:28Z","lastTransitionTime":"2025-12-01T14:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.399153 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.399224 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.399242 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.399269 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.399306 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:28Z","lastTransitionTime":"2025-12-01T14:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.501914 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.502314 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.502350 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.502382 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.502405 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:28Z","lastTransitionTime":"2025-12-01T14:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.605605 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.605649 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.605660 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.605675 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.605687 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:28Z","lastTransitionTime":"2025-12-01T14:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.707960 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.707997 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.708008 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.708024 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.708043 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:28Z","lastTransitionTime":"2025-12-01T14:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.771273 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.771464 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.771761 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:28 crc kubenswrapper[4637]: E1201 14:47:28.772152 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:47:28 crc kubenswrapper[4637]: E1201 14:47:28.772229 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:28 crc kubenswrapper[4637]: E1201 14:47:28.772306 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.810815 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.810962 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.810991 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.811015 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.811032 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:28Z","lastTransitionTime":"2025-12-01T14:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.913331 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.913374 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.913386 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.913401 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:28 crc kubenswrapper[4637]: I1201 14:47:28.913412 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:28Z","lastTransitionTime":"2025-12-01T14:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.015979 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.016031 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.016047 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.016065 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.016076 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:29Z","lastTransitionTime":"2025-12-01T14:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.119407 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.119467 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.119479 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.119498 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.119511 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:29Z","lastTransitionTime":"2025-12-01T14:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.222625 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.222703 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.222740 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.222765 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.222781 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:29Z","lastTransitionTime":"2025-12-01T14:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.326422 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.326484 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.326503 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.326529 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.326546 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:29Z","lastTransitionTime":"2025-12-01T14:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.429257 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.429303 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.429315 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.429332 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.429345 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:29Z","lastTransitionTime":"2025-12-01T14:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.532177 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.532208 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.532215 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.532228 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.532237 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:29Z","lastTransitionTime":"2025-12-01T14:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.634621 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.634650 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.634658 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.634670 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.634678 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:29Z","lastTransitionTime":"2025-12-01T14:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.737869 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.737956 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.737972 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.737991 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.738005 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:29Z","lastTransitionTime":"2025-12-01T14:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.770800 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:29 crc kubenswrapper[4637]: E1201 14:47:29.771083 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.792849 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.807137 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5vnrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbdafb4-bc82-462e-be58-844b876172c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b70688d12fa0e6084bce0c5df2a422a4c4704d446d40c15ccf470522cbc53cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s62lk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5vnrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.825714 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.835268 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-blxft" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92faa232-0163-4022-8f1c-ade68529f250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea3110b4763fedb47a47f8ef0bb8603c30be6d478c201bde5e1182061f0101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rccrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-blxft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.840394 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.840429 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.840438 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.840453 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.840462 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:29Z","lastTransitionTime":"2025-12-01T14:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.847375 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80c54f70-de42-4510-8ae3-d5ef74e13ab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7e6ae528b43e9343dd5219dc19ef92a71343f883c073d0b611b107b82033f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94cff2815328d07dee4e35812d50827e47165f3cc2ba0bd2fb8e2e230d29da54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7gdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gdttc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.856594 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7w2l8" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"435e8f74-9c96-4508-b6a6-a1a2280f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdlds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7w2l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:29 crc 
kubenswrapper[4637]: I1201 14:47:29.868889 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a52fb4-59fd-4808-a6ae-cf612eba5432\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b7c94a1bccaad0c845bfdcff00d0081a1ac971672639f0bbc79954e5868058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6caa92a7fb2d8c557da9fd84eede71f03593af03cbef84ceb931fa161bbae109\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4df77482d245fa32a1773c5625d26fda6ecfbe8150ac9f1fb8b2258a4a54fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.879988 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ee01a4e-140a-47fb-bae0-6dc6088e3a18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226ba1817928934b4987c768dc91c2b010d39f84323f2d84555a9e3418e1563b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6289a7db0399411b7aa516cf9953e3e199b7c83e4858133819dc0b436ffcddfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6289a7db0399411b7aa516cf9953e3e199b7c83e4858133819dc0b436ffcddfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.891741 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e664fd5a7cac9fa71e2e8e338ebddbacc4c91fbeb6ecb9bcca5642f80a8b6ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ddf0961dd2e360f64a52d341b76104503e7cfa9bf8df7265a4db0cf2bda7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.903248 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.917683 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e564b19-2536-41be-874f-622840ea7ea1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T14:46:08Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 14:46:03.036449 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 14:46:03.037510 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3449711387/tls.crt::/tmp/serving-cert-3449711387/tls.key\\\\\\\"\\\\nI1201 14:46:08.646225 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 14:46:08.648405 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 14:46:08.648422 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 14:46:08.648444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 14:46:08.648451 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 14:46:08.652912 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 14:46:08.652973 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652977 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 14:46:08.652981 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 14:46:08.652985 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 14:46:08.652987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 14:46:08.652990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 14:46:08.653110 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1201 14:46:08.655256 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5865478ee586cf4a6c60ea5e55f9ce6e6
13190b936a96444880dd5eec76082a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.929997 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42c60d8d849bf916a8e24f5d8186db0b95ba8c394edba507b3d45c313a7e7004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T14:47:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.941893 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2db6c86b-ff8c-4746-9c91-7dac0498c0b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75dd9e1538feee5cff0298fca1c72752c005aff341a13502f07697731f07ce72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjm8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7rjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.942787 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 
14:47:29.942813 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.942823 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.942836 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.942845 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:29Z","lastTransitionTime":"2025-12-01T14:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.958416 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:47:14Z\\\",\\\"message\\\":\\\"ger_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 
neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1201 14:47:14.558212 6622 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 14:47:14.558364 6622 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:14Z is after 2025-08-24T17:21:41Z]\\\\nI1201 14:47:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:47:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rhl62_openshift-ovn-kubernetes(d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c2c47ce47e56f2df8
984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rhl62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.969902 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d5895" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4131bcca-3504-4255-879d-7921162a335c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5740d65aaf14069cace7ecf8b5a63e990faffe3beb61e59e2cf5b9cf0e8caaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b534aae7d90a380500207955e5d50c01979e6bbf4590e686816332c1305c277e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff5a9807ec808e7dadb8201029e23913d3365035d44d57f8853c43cf15114d0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:13Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8534221ccfeeeaefc632c97d5251ed4d7daf862a47f2aff6860a64aaffd66701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac4
ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ac4ab065b09f452d9124d267251325b0ee6b71c2449fe0fa445849bffa631d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c5e5f9592c3e77de61071fe2367d6b47c8b26a2c46c3e9d84667239f69fc45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:16Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030544cca4a48ba4628b25402ab6bddbf511a810232f51b21afd8e134f33afe2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njg79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d5895\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.978988 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"574717bd-1b8a-4875-a64c-e1d4d2ac7204\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c542c15fb110a9f497f95bd8f0abeeec65e9b41552d5c428012654d7ecb8bf75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ea0f19f27900daddcad33f43457b692b87c8948009801adfc7ee636e49e7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1844efe5596b8a4c272d93b0b118516d918bcbafc8f469c49a5c01ef90db42c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b75d6a35f5c0d1532624b1b3d
bee1fe360046b2f6ba685411684c93f8b557d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b75d6a35f5c0d1532624b1b3dbee1fe360046b2f6ba685411684c93f8b557d2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T14:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T14:45:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:45:50Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:29 crc kubenswrapper[4637]: I1201 14:47:29.993646 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409b7473c6757b2cfdde255a70cc983cb0383be4dd54a86449c020da43dd5591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:29Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.007067 4637 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n2brl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64d8237-8116-4742-8d7f-9f6e8018e4c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T14:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a163b9bea4f475be435e2bcf52012f4682da33e52b9a6b4b6dd6a71b59045a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T14:46:56Z\\\",\\\"message\\\":\\\"2025-12-01T14:46:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_030b68d2-5d99-46f3-9a21-70074a0d7276\\\\n2025-12-01T14:46:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_030b68d2-5d99-46f3-9a21-70074a0d7276 to /host/opt/cni/bin/\\\\n2025-12-01T14:46:11Z [verbose] multus-daemon started\\\\n2025-12-01T14:46:11Z [verbose] Readiness Indicator file check\\\\n2025-12-01T14:46:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T14:46:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T14:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T14:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n2brl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:30Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.045713 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.045752 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.045765 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.045781 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.045794 4637 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:30Z","lastTransitionTime":"2025-12-01T14:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.148708 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.148771 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.148789 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.148816 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.148833 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:30Z","lastTransitionTime":"2025-12-01T14:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.251923 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.252016 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.252032 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.252057 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.252074 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:30Z","lastTransitionTime":"2025-12-01T14:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.355077 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.355136 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.355206 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.355237 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.355481 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:30Z","lastTransitionTime":"2025-12-01T14:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.458287 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.458315 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.458323 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.458337 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.458345 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:30Z","lastTransitionTime":"2025-12-01T14:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.560210 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.560245 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.560253 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.560265 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.560273 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:30Z","lastTransitionTime":"2025-12-01T14:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.662777 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.662819 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.662830 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.662847 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.662857 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:30Z","lastTransitionTime":"2025-12-01T14:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.764594 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.764638 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.764648 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.764661 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.764670 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:30Z","lastTransitionTime":"2025-12-01T14:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.771031 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.771107 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:30 crc kubenswrapper[4637]: E1201 14:47:30.771134 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:30 crc kubenswrapper[4637]: E1201 14:47:30.771266 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.771292 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:30 crc kubenswrapper[4637]: E1201 14:47:30.771351 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.867079 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.867158 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.867174 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.867190 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.867202 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:30Z","lastTransitionTime":"2025-12-01T14:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.969470 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.969538 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.969553 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.969568 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:30 crc kubenswrapper[4637]: I1201 14:47:30.969579 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:30Z","lastTransitionTime":"2025-12-01T14:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.071747 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.071814 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.071824 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.071851 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.071872 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:31Z","lastTransitionTime":"2025-12-01T14:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.174086 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.174122 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.174133 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.174147 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.174157 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:31Z","lastTransitionTime":"2025-12-01T14:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.276064 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.276092 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.276100 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.276112 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.276122 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:31Z","lastTransitionTime":"2025-12-01T14:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.378405 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.378487 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.378508 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.378528 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.378543 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:31Z","lastTransitionTime":"2025-12-01T14:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.481553 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.481608 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.481631 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.481658 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.481676 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:31Z","lastTransitionTime":"2025-12-01T14:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.583529 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.583572 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.583585 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.583601 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.583614 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:31Z","lastTransitionTime":"2025-12-01T14:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.686503 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.686556 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.686570 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.686597 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.686610 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:31Z","lastTransitionTime":"2025-12-01T14:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.771131 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:31 crc kubenswrapper[4637]: E1201 14:47:31.771611 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.771631 4637 scope.go:117] "RemoveContainer" containerID="abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292" Dec 01 14:47:31 crc kubenswrapper[4637]: E1201 14:47:31.772038 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rhl62_openshift-ovn-kubernetes(d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.784806 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.789173 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.789210 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.789221 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.789236 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.789246 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:31Z","lastTransitionTime":"2025-12-01T14:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.891167 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.891218 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.891233 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.891253 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.891268 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:31Z","lastTransitionTime":"2025-12-01T14:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.993267 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.993311 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.993323 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.993341 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:31 crc kubenswrapper[4637]: I1201 14:47:31.993353 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:31Z","lastTransitionTime":"2025-12-01T14:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.096339 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.096381 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.096398 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.096419 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.096435 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:32Z","lastTransitionTime":"2025-12-01T14:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.198385 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.198427 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.198440 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.198458 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.198469 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:32Z","lastTransitionTime":"2025-12-01T14:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.300905 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.301001 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.301021 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.301045 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.301062 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:32Z","lastTransitionTime":"2025-12-01T14:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.403564 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.403612 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.403623 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.403638 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.403650 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:32Z","lastTransitionTime":"2025-12-01T14:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.505819 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.505852 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.505863 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.505878 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.505888 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:32Z","lastTransitionTime":"2025-12-01T14:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.608354 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.608382 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.608390 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.608403 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.608411 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:32Z","lastTransitionTime":"2025-12-01T14:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.710732 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.710764 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.710774 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.710787 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.710795 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:32Z","lastTransitionTime":"2025-12-01T14:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.770393 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.770474 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.770413 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:32 crc kubenswrapper[4637]: E1201 14:47:32.770538 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:47:32 crc kubenswrapper[4637]: E1201 14:47:32.770628 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:47:32 crc kubenswrapper[4637]: E1201 14:47:32.770763 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.813727 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.813798 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.813821 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.813847 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.813865 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:32Z","lastTransitionTime":"2025-12-01T14:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.915867 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.915899 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.915909 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.915926 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:32 crc kubenswrapper[4637]: I1201 14:47:32.915972 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:32Z","lastTransitionTime":"2025-12-01T14:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.018589 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.018614 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.018622 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.018634 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.018642 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:33Z","lastTransitionTime":"2025-12-01T14:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.121879 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.122011 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.122032 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.122061 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.122079 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:33Z","lastTransitionTime":"2025-12-01T14:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.224842 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.224895 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.224911 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.224954 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.224971 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:33Z","lastTransitionTime":"2025-12-01T14:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.328365 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.328427 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.328446 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.328470 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.328488 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:33Z","lastTransitionTime":"2025-12-01T14:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.431537 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.431668 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.431685 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.431711 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.431727 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:33Z","lastTransitionTime":"2025-12-01T14:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.534433 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.534492 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.534510 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.534535 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.534554 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:33Z","lastTransitionTime":"2025-12-01T14:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.637745 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.637797 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.637814 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.637837 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.637853 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:33Z","lastTransitionTime":"2025-12-01T14:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.741705 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.741756 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.741769 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.741785 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.741795 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:33Z","lastTransitionTime":"2025-12-01T14:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.771375 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:33 crc kubenswrapper[4637]: E1201 14:47:33.771789 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.845225 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.845298 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.845319 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.845348 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.845387 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:33Z","lastTransitionTime":"2025-12-01T14:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.948186 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.948248 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.948274 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.948304 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:33 crc kubenswrapper[4637]: I1201 14:47:33.948328 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:33Z","lastTransitionTime":"2025-12-01T14:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.052400 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.052470 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.052550 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.052583 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.052606 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:34Z","lastTransitionTime":"2025-12-01T14:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.155832 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.155906 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.155982 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.156015 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.156044 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:34Z","lastTransitionTime":"2025-12-01T14:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.260285 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.260406 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.260468 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.260504 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.260559 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:34Z","lastTransitionTime":"2025-12-01T14:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.364849 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.364950 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.364963 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.364985 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.365000 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:34Z","lastTransitionTime":"2025-12-01T14:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.467820 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.467895 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.467908 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.467956 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.467970 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:34Z","lastTransitionTime":"2025-12-01T14:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.571099 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.571128 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.571135 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.571148 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.571156 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:34Z","lastTransitionTime":"2025-12-01T14:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.673414 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.673460 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.673474 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.673511 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.673526 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:34Z","lastTransitionTime":"2025-12-01T14:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.770312 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.770315 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:34 crc kubenswrapper[4637]: E1201 14:47:34.770504 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:47:34 crc kubenswrapper[4637]: E1201 14:47:34.770424 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.770318 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:34 crc kubenswrapper[4637]: E1201 14:47:34.770577 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.776691 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.776753 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.776775 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.776822 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.776845 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:34Z","lastTransitionTime":"2025-12-01T14:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.879597 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.879642 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.879655 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.879672 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.879685 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:34Z","lastTransitionTime":"2025-12-01T14:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.982044 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.982112 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.982131 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.982155 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:34 crc kubenswrapper[4637]: I1201 14:47:34.982182 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:34Z","lastTransitionTime":"2025-12-01T14:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.085437 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.085486 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.085504 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.085531 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.085565 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:35Z","lastTransitionTime":"2025-12-01T14:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.185442 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.185542 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.185567 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.185597 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.185619 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:35Z","lastTransitionTime":"2025-12-01T14:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:35 crc kubenswrapper[4637]: E1201 14:47:35.203637 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:35Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.207872 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.207896 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.207903 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.207915 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.207925 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:35Z","lastTransitionTime":"2025-12-01T14:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:35 crc kubenswrapper[4637]: E1201 14:47:35.219355 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:35Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.223676 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.223738 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.223756 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.223779 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.223796 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:35Z","lastTransitionTime":"2025-12-01T14:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:35 crc kubenswrapper[4637]: E1201 14:47:35.240136 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:35Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.244439 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.244490 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.244513 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.244536 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.244552 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:35Z","lastTransitionTime":"2025-12-01T14:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:35 crc kubenswrapper[4637]: E1201 14:47:35.258219 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:35Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.267492 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.267521 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.267529 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.267541 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.267551 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:35Z","lastTransitionTime":"2025-12-01T14:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:35 crc kubenswrapper[4637]: E1201 14:47:35.285027 4637 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T14:47:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d9978c86-16e7-4847-903a-8e83206e0eb1\\\",\\\"systemUUID\\\":\\\"e36facbc-e27f-4191-ad98-6bea77d7ef5d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T14:47:35Z is after 2025-08-24T17:21:41Z" Dec 01 14:47:35 crc kubenswrapper[4637]: E1201 14:47:35.285134 4637 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.286368 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.286416 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.286433 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.286455 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.286474 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:35Z","lastTransitionTime":"2025-12-01T14:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.389072 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.389134 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.389151 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.389174 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.389216 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:35Z","lastTransitionTime":"2025-12-01T14:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.492052 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.492114 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.492135 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.492167 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.492187 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:35Z","lastTransitionTime":"2025-12-01T14:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.594979 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.595007 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.595015 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.595029 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.595037 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:35Z","lastTransitionTime":"2025-12-01T14:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.698612 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.698653 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.698663 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.698679 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.698690 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:35Z","lastTransitionTime":"2025-12-01T14:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.770394 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:35 crc kubenswrapper[4637]: E1201 14:47:35.770682 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.800360 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.800397 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.800406 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.800591 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.800606 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:35Z","lastTransitionTime":"2025-12-01T14:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.904817 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.906020 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.906428 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.906692 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:35 crc kubenswrapper[4637]: I1201 14:47:35.906907 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:35Z","lastTransitionTime":"2025-12-01T14:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.009669 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.009729 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.009739 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.009754 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.009764 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:36Z","lastTransitionTime":"2025-12-01T14:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.111920 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.111975 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.111988 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.112002 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.112011 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:36Z","lastTransitionTime":"2025-12-01T14:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.213978 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.214013 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.214024 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.214039 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.214050 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:36Z","lastTransitionTime":"2025-12-01T14:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.316005 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.316479 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.316647 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.316859 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.317033 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:36Z","lastTransitionTime":"2025-12-01T14:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.423339 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.423417 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.423460 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.423491 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.423509 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:36Z","lastTransitionTime":"2025-12-01T14:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.527970 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.528069 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.528097 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.528130 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.528154 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:36Z","lastTransitionTime":"2025-12-01T14:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.632127 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.632186 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.632205 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.632228 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.632246 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:36Z","lastTransitionTime":"2025-12-01T14:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.735291 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.735455 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.735482 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.735510 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.735532 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:36Z","lastTransitionTime":"2025-12-01T14:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.771254 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.771254 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:36 crc kubenswrapper[4637]: E1201 14:47:36.771499 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:47:36 crc kubenswrapper[4637]: E1201 14:47:36.771630 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.771447 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:36 crc kubenswrapper[4637]: E1201 14:47:36.771844 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.838073 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.838129 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.838141 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.838159 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.838172 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:36Z","lastTransitionTime":"2025-12-01T14:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.942252 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.942322 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.942349 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.942379 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:36 crc kubenswrapper[4637]: I1201 14:47:36.942402 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:36Z","lastTransitionTime":"2025-12-01T14:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.045132 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.045205 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.045227 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.045254 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.045275 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:37Z","lastTransitionTime":"2025-12-01T14:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.147776 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.147840 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.147864 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.147923 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.148011 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:37Z","lastTransitionTime":"2025-12-01T14:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.251067 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.251142 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.251166 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.251195 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.251218 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:37Z","lastTransitionTime":"2025-12-01T14:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.354037 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.354095 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.354112 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.354137 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.354156 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:37Z","lastTransitionTime":"2025-12-01T14:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.457469 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.457540 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.457562 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.457591 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.457616 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:37Z","lastTransitionTime":"2025-12-01T14:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.560895 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.560947 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.560962 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.560980 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.560994 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:37Z","lastTransitionTime":"2025-12-01T14:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.664349 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.664427 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.664451 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.664478 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.664498 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:37Z","lastTransitionTime":"2025-12-01T14:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.767216 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.767287 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.767321 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.767350 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.767373 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:37Z","lastTransitionTime":"2025-12-01T14:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.771279 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:37 crc kubenswrapper[4637]: E1201 14:47:37.771541 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.870359 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.870414 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.870423 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.870437 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.870447 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:37Z","lastTransitionTime":"2025-12-01T14:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.973315 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.973360 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.973370 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.973386 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:37 crc kubenswrapper[4637]: I1201 14:47:37.973398 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:37Z","lastTransitionTime":"2025-12-01T14:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.075553 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.075634 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.075647 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.075667 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.075681 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:38Z","lastTransitionTime":"2025-12-01T14:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.178452 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.178493 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.178505 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.178524 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.178540 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:38Z","lastTransitionTime":"2025-12-01T14:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.281090 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.281132 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.281143 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.281158 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.281168 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:38Z","lastTransitionTime":"2025-12-01T14:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.383087 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.383134 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.383143 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.383155 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.383389 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:38Z","lastTransitionTime":"2025-12-01T14:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.486234 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.486259 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.486267 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.486279 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.486288 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:38Z","lastTransitionTime":"2025-12-01T14:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.588789 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.588817 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.588825 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.588838 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.588846 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:38Z","lastTransitionTime":"2025-12-01T14:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.690818 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.690853 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.690862 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.690880 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.690896 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:38Z","lastTransitionTime":"2025-12-01T14:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.771152 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.771160 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.771744 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:38 crc kubenswrapper[4637]: E1201 14:47:38.771889 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:38 crc kubenswrapper[4637]: E1201 14:47:38.772171 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:47:38 crc kubenswrapper[4637]: E1201 14:47:38.772475 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.794268 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.794702 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.795166 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.795518 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.795686 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:38Z","lastTransitionTime":"2025-12-01T14:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.898316 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.898600 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.898664 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.898725 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:38 crc kubenswrapper[4637]: I1201 14:47:38.898781 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:38Z","lastTransitionTime":"2025-12-01T14:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.001746 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.001837 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.001882 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.001913 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.001990 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:39Z","lastTransitionTime":"2025-12-01T14:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.104219 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.104263 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.104274 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.104291 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.104302 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:39Z","lastTransitionTime":"2025-12-01T14:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.208114 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.209584 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.209755 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.209860 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.210015 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:39Z","lastTransitionTime":"2025-12-01T14:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.312234 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.312283 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.312294 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.312309 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.312320 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:39Z","lastTransitionTime":"2025-12-01T14:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.414752 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.414791 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.414802 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.414818 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.414830 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:39Z","lastTransitionTime":"2025-12-01T14:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.517129 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.517169 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.517180 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.517195 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.517207 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:39Z","lastTransitionTime":"2025-12-01T14:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.620296 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.620609 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.620726 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.620830 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.620953 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:39Z","lastTransitionTime":"2025-12-01T14:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.724011 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.724063 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.724079 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.724102 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.724118 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:39Z","lastTransitionTime":"2025-12-01T14:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.770923 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:39 crc kubenswrapper[4637]: E1201 14:47:39.771365 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.809618 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-blxft" podStartSLOduration=90.809574869 podStartE2EDuration="1m30.809574869s" podCreationTimestamp="2025-12-01 14:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:47:39.809066505 +0000 UTC m=+110.326775353" watchObservedRunningTime="2025-12-01 14:47:39.809574869 +0000 UTC m=+110.327283707" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.828584 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.828620 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.828635 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.828657 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.828672 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:39Z","lastTransitionTime":"2025-12-01T14:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.837915 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gdttc" podStartSLOduration=89.837893996 podStartE2EDuration="1m29.837893996s" podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:47:39.822343465 +0000 UTC m=+110.340052303" watchObservedRunningTime="2025-12-01 14:47:39.837893996 +0000 UTC m=+110.355602844" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.856401 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=89.856382347 podStartE2EDuration="1m29.856382347s" podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:47:39.855996367 +0000 UTC m=+110.373705205" watchObservedRunningTime="2025-12-01 14:47:39.856382347 +0000 UTC m=+110.374091175" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.880972 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=41.880923781999996 podStartE2EDuration="41.880923782s" podCreationTimestamp="2025-12-01 14:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:47:39.868883376 +0000 UTC m=+110.386592214" watchObservedRunningTime="2025-12-01 14:47:39.880923782 +0000 UTC m=+110.398632620" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.914290 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=91.914273775 podStartE2EDuration="1m31.914273775s" podCreationTimestamp="2025-12-01 14:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:47:39.913782722 +0000 UTC m=+110.431491550" watchObservedRunningTime="2025-12-01 14:47:39.914273775 +0000 UTC m=+110.431982603" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.931245 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.931280 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.931288 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.931300 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.931309 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:39Z","lastTransitionTime":"2025-12-01T14:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:39 crc kubenswrapper[4637]: I1201 14:47:39.981785 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podStartSLOduration=90.981766285 podStartE2EDuration="1m30.981766285s" podCreationTimestamp="2025-12-01 14:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:47:39.955860223 +0000 UTC m=+110.473569111" watchObservedRunningTime="2025-12-01 14:47:39.981766285 +0000 UTC m=+110.499475113" Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.022399 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=9.022373015 podStartE2EDuration="9.022373015s" podCreationTimestamp="2025-12-01 14:47:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:47:40.021556403 +0000 UTC m=+110.539265231" watchObservedRunningTime="2025-12-01 14:47:40.022373015 +0000 UTC m=+110.540081843" Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.022802 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-d5895" podStartSLOduration=91.022796047 podStartE2EDuration="1m31.022796047s" podCreationTimestamp="2025-12-01 14:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:47:39.998242031 +0000 UTC m=+110.515950859" watchObservedRunningTime="2025-12-01 14:47:40.022796047 +0000 UTC m=+110.540504875" Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.032707 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 
14:47:40.032736 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.032744 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.032756 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.032765 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:40Z","lastTransitionTime":"2025-12-01T14:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.038199 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=58.038162263 podStartE2EDuration="58.038162263s" podCreationTimestamp="2025-12-01 14:46:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:47:40.036910369 +0000 UTC m=+110.554619207" watchObservedRunningTime="2025-12-01 14:47:40.038162263 +0000 UTC m=+110.555871091"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.074770 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-n2brl" podStartSLOduration=91.074753584 podStartE2EDuration="1m31.074753584s" podCreationTimestamp="2025-12-01 14:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:47:40.074524058 +0000 UTC m=+110.592232906" watchObservedRunningTime="2025-12-01 14:47:40.074753584 +0000 UTC m=+110.592462412"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.102800 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-5vnrh" podStartSLOduration=92.102781824 podStartE2EDuration="1m32.102781824s" podCreationTimestamp="2025-12-01 14:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:47:40.102419264 +0000 UTC m=+110.620128092" watchObservedRunningTime="2025-12-01 14:47:40.102781824 +0000 UTC m=+110.620490652"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.134877 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.134913 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.134922 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.134959 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.134969 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:40Z","lastTransitionTime":"2025-12-01T14:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.237842 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.238430 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.238527 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.238631 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.238720 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:40Z","lastTransitionTime":"2025-12-01T14:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.340664 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.340709 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.340721 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.340737 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.340749 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:40Z","lastTransitionTime":"2025-12-01T14:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.442743 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.442774 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.442787 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.442803 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.442814 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:40Z","lastTransitionTime":"2025-12-01T14:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.544975 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.545020 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.545033 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.545050 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.545061 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:40Z","lastTransitionTime":"2025-12-01T14:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.647638 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.647754 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.647827 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.647859 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.647926 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:40Z","lastTransitionTime":"2025-12-01T14:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.750733 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.750796 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.750816 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.750841 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.750862 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:40Z","lastTransitionTime":"2025-12-01T14:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.771387 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 01 14:47:40 crc kubenswrapper[4637]: E1201 14:47:40.771557 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.771415 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.771387 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 01 14:47:40 crc kubenswrapper[4637]: E1201 14:47:40.771707 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 01 14:47:40 crc kubenswrapper[4637]: E1201 14:47:40.771832 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.854343 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.854393 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.854405 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.854423 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.854435 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:40Z","lastTransitionTime":"2025-12-01T14:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.957708 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.957840 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.957863 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.957894 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 14:47:40 crc kubenswrapper[4637]: I1201 14:47:40.957916 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:40Z","lastTransitionTime":"2025-12-01T14:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.061349 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.061415 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.061435 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.061462 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.061482 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:41Z","lastTransitionTime":"2025-12-01T14:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.164872 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.164974 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.165003 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.165033 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.165050 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:41Z","lastTransitionTime":"2025-12-01T14:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.267525 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.267560 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.267572 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.267588 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.267602 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:41Z","lastTransitionTime":"2025-12-01T14:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.370255 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.370333 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.370355 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.370383 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.370404 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:41Z","lastTransitionTime":"2025-12-01T14:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.473080 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.473129 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.473141 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.473160 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.473176 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:41Z","lastTransitionTime":"2025-12-01T14:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.576193 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.576235 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.576247 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.576264 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.576275 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:41Z","lastTransitionTime":"2025-12-01T14:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.678679 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.678990 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.679111 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.679243 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.679344 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:41Z","lastTransitionTime":"2025-12-01T14:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.770584 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8"
Dec 01 14:47:41 crc kubenswrapper[4637]: E1201 14:47:41.770820 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.781598 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.781741 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.781808 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.781886 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.781985 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:41Z","lastTransitionTime":"2025-12-01T14:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.885153 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.885228 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.885250 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.885277 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.885308 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:41Z","lastTransitionTime":"2025-12-01T14:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.987911 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.988250 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.988348 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.988428 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 14:47:41 crc kubenswrapper[4637]: I1201 14:47:41.988519 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:41Z","lastTransitionTime":"2025-12-01T14:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.090913 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.090963 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.090973 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.090987 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.090996 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:42Z","lastTransitionTime":"2025-12-01T14:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.193786 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.193822 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.193862 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.193881 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.193915 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:42Z","lastTransitionTime":"2025-12-01T14:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.298978 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.299036 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.299048 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.299064 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.299074 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:42Z","lastTransitionTime":"2025-12-01T14:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.400831 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.400864 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.400876 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.400890 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.400902 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:42Z","lastTransitionTime":"2025-12-01T14:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.503105 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.503423 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.503556 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.503666 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.503746 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:42Z","lastTransitionTime":"2025-12-01T14:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.605718 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.605750 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.605760 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.605774 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.605784 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:42Z","lastTransitionTime":"2025-12-01T14:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.710923 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.710978 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.710990 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.711008 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.711017 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:42Z","lastTransitionTime":"2025-12-01T14:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.770968 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.771078 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 01 14:47:42 crc kubenswrapper[4637]: E1201 14:47:42.771094 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 01 14:47:42 crc kubenswrapper[4637]: E1201 14:47:42.771276 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.770987 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 01 14:47:42 crc kubenswrapper[4637]: E1201 14:47:42.771598 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.813279 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.813333 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.813349 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.813370 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.813387 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:42Z","lastTransitionTime":"2025-12-01T14:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.915963 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.916022 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.916038 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.916064 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:42 crc kubenswrapper[4637]: I1201 14:47:42.916080 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:42Z","lastTransitionTime":"2025-12-01T14:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.019019 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.019055 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.019066 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.019083 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.019097 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:43Z","lastTransitionTime":"2025-12-01T14:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.121818 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.122149 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.122257 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.122322 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.122394 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:43Z","lastTransitionTime":"2025-12-01T14:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.225684 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.225769 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.225789 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.225812 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.225831 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:43Z","lastTransitionTime":"2025-12-01T14:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.309138 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n2brl_f64d8237-8116-4742-8d7f-9f6e8018e4c2/kube-multus/1.log" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.309981 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n2brl_f64d8237-8116-4742-8d7f-9f6e8018e4c2/kube-multus/0.log" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.310022 4637 generic.go:334] "Generic (PLEG): container finished" podID="f64d8237-8116-4742-8d7f-9f6e8018e4c2" containerID="a163b9bea4f475be435e2bcf52012f4682da33e52b9a6b4b6dd6a71b59045a26" exitCode=1 Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.310050 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-n2brl" event={"ID":"f64d8237-8116-4742-8d7f-9f6e8018e4c2","Type":"ContainerDied","Data":"a163b9bea4f475be435e2bcf52012f4682da33e52b9a6b4b6dd6a71b59045a26"} Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.310080 4637 scope.go:117] "RemoveContainer" containerID="837b3736d870273e47710801dd049878293fed546331ec0898783b64af3579da" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.310597 4637 scope.go:117] "RemoveContainer" containerID="a163b9bea4f475be435e2bcf52012f4682da33e52b9a6b4b6dd6a71b59045a26" Dec 01 14:47:43 crc kubenswrapper[4637]: E1201 14:47:43.310808 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-n2brl_openshift-multus(f64d8237-8116-4742-8d7f-9f6e8018e4c2)\"" pod="openshift-multus/multus-n2brl" podUID="f64d8237-8116-4742-8d7f-9f6e8018e4c2" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.330148 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:43 crc 
kubenswrapper[4637]: I1201 14:47:43.330188 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.330196 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.330211 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.330222 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:43Z","lastTransitionTime":"2025-12-01T14:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.432229 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.432280 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.432290 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.432304 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.432313 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:43Z","lastTransitionTime":"2025-12-01T14:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.534324 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.534542 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.534610 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.534681 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.534736 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:43Z","lastTransitionTime":"2025-12-01T14:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.637753 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.637790 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.637799 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.637811 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.637821 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:43Z","lastTransitionTime":"2025-12-01T14:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.739561 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.739597 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.739613 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.739634 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.739648 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:43Z","lastTransitionTime":"2025-12-01T14:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.770350 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:43 crc kubenswrapper[4637]: E1201 14:47:43.770545 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.841659 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.841885 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.841969 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.842054 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.842121 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:43Z","lastTransitionTime":"2025-12-01T14:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.944291 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.944326 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.944336 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.944352 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:43 crc kubenswrapper[4637]: I1201 14:47:43.944363 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:43Z","lastTransitionTime":"2025-12-01T14:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.047044 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.047829 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.048293 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.048611 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.049115 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:44Z","lastTransitionTime":"2025-12-01T14:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.152292 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.152324 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.152336 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.152351 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.152361 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:44Z","lastTransitionTime":"2025-12-01T14:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.255441 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.255493 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.255503 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.255520 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.255531 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:44Z","lastTransitionTime":"2025-12-01T14:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.318394 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n2brl_f64d8237-8116-4742-8d7f-9f6e8018e4c2/kube-multus/1.log" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.357357 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.357395 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.357406 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.357423 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.357436 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:44Z","lastTransitionTime":"2025-12-01T14:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.460158 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.460230 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.460253 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.460281 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.460306 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:44Z","lastTransitionTime":"2025-12-01T14:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.564471 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.564521 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.564532 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.564548 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.564563 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:44Z","lastTransitionTime":"2025-12-01T14:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.667625 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.667729 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.667751 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.667781 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.667809 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:44Z","lastTransitionTime":"2025-12-01T14:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.771017 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:44 crc kubenswrapper[4637]: E1201 14:47:44.771166 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.771215 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:44 crc kubenswrapper[4637]: E1201 14:47:44.771396 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.771571 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.771607 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.771621 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.771639 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.771655 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:44Z","lastTransitionTime":"2025-12-01T14:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.771757 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:44 crc kubenswrapper[4637]: E1201 14:47:44.771892 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.875319 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.876146 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.876223 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.876257 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.876337 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:44Z","lastTransitionTime":"2025-12-01T14:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.978784 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.978826 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.978835 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.978849 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:44 crc kubenswrapper[4637]: I1201 14:47:44.978859 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:44Z","lastTransitionTime":"2025-12-01T14:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.080913 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.081220 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.081365 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.081505 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.081628 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:45Z","lastTransitionTime":"2025-12-01T14:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.183751 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.183799 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.183815 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.183836 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.183853 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:45Z","lastTransitionTime":"2025-12-01T14:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.286537 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.286600 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.286618 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.286642 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.286659 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:45Z","lastTransitionTime":"2025-12-01T14:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.301480 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.301645 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.301679 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.301714 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.301735 4637 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T14:47:45Z","lastTransitionTime":"2025-12-01T14:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.373686 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrr4c"] Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.374597 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrr4c" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.376548 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.376865 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.376893 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.378604 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.418560 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b8ec86a4-ebd7-4fa0-84ac-703fb4dde001-service-ca\") pod \"cluster-version-operator-5c965bbfc6-nrr4c\" (UID: \"b8ec86a4-ebd7-4fa0-84ac-703fb4dde001\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrr4c" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.418685 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b8ec86a4-ebd7-4fa0-84ac-703fb4dde001-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-nrr4c\" (UID: \"b8ec86a4-ebd7-4fa0-84ac-703fb4dde001\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrr4c" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.418724 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b8ec86a4-ebd7-4fa0-84ac-703fb4dde001-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-nrr4c\" (UID: \"b8ec86a4-ebd7-4fa0-84ac-703fb4dde001\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrr4c" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.419036 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b8ec86a4-ebd7-4fa0-84ac-703fb4dde001-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-nrr4c\" (UID: \"b8ec86a4-ebd7-4fa0-84ac-703fb4dde001\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrr4c" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.419158 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8ec86a4-ebd7-4fa0-84ac-703fb4dde001-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-nrr4c\" (UID: \"b8ec86a4-ebd7-4fa0-84ac-703fb4dde001\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrr4c" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.520853 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b8ec86a4-ebd7-4fa0-84ac-703fb4dde001-service-ca\") pod \"cluster-version-operator-5c965bbfc6-nrr4c\" (UID: \"b8ec86a4-ebd7-4fa0-84ac-703fb4dde001\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrr4c" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.520989 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b8ec86a4-ebd7-4fa0-84ac-703fb4dde001-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-nrr4c\" (UID: \"b8ec86a4-ebd7-4fa0-84ac-703fb4dde001\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrr4c" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.521027 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8ec86a4-ebd7-4fa0-84ac-703fb4dde001-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-nrr4c\" (UID: \"b8ec86a4-ebd7-4fa0-84ac-703fb4dde001\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrr4c" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.521070 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b8ec86a4-ebd7-4fa0-84ac-703fb4dde001-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-nrr4c\" (UID: \"b8ec86a4-ebd7-4fa0-84ac-703fb4dde001\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrr4c" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.521159 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8ec86a4-ebd7-4fa0-84ac-703fb4dde001-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-nrr4c\" (UID: \"b8ec86a4-ebd7-4fa0-84ac-703fb4dde001\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrr4c" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.521700 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b8ec86a4-ebd7-4fa0-84ac-703fb4dde001-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-nrr4c\" (UID: \"b8ec86a4-ebd7-4fa0-84ac-703fb4dde001\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrr4c" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.521776 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/b8ec86a4-ebd7-4fa0-84ac-703fb4dde001-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-nrr4c\" (UID: \"b8ec86a4-ebd7-4fa0-84ac-703fb4dde001\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrr4c" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.524366 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b8ec86a4-ebd7-4fa0-84ac-703fb4dde001-service-ca\") pod \"cluster-version-operator-5c965bbfc6-nrr4c\" (UID: \"b8ec86a4-ebd7-4fa0-84ac-703fb4dde001\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrr4c" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.529326 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8ec86a4-ebd7-4fa0-84ac-703fb4dde001-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-nrr4c\" (UID: \"b8ec86a4-ebd7-4fa0-84ac-703fb4dde001\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrr4c" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.550294 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8ec86a4-ebd7-4fa0-84ac-703fb4dde001-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-nrr4c\" (UID: \"b8ec86a4-ebd7-4fa0-84ac-703fb4dde001\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrr4c" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.696305 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrr4c" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.770905 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:45 crc kubenswrapper[4637]: E1201 14:47:45.771036 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:45 crc kubenswrapper[4637]: I1201 14:47:45.771977 4637 scope.go:117] "RemoveContainer" containerID="abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292" Dec 01 14:47:45 crc kubenswrapper[4637]: E1201 14:47:45.772256 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rhl62_openshift-ovn-kubernetes(d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" Dec 01 14:47:46 crc kubenswrapper[4637]: I1201 14:47:46.327074 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrr4c" event={"ID":"b8ec86a4-ebd7-4fa0-84ac-703fb4dde001","Type":"ContainerStarted","Data":"454087a20de92d11b33dc6ac3e3cd4af1d6973397f6fc2bcb41e5665a410cb47"} Dec 01 14:47:46 crc kubenswrapper[4637]: I1201 14:47:46.327117 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrr4c" event={"ID":"b8ec86a4-ebd7-4fa0-84ac-703fb4dde001","Type":"ContainerStarted","Data":"0cd599a4c80e2636dfa03a52620f9f4f07c88b44b60f84b75f916f633ae69abd"} Dec 01 14:47:46 crc kubenswrapper[4637]: I1201 14:47:46.771296 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:46 crc kubenswrapper[4637]: I1201 14:47:46.771296 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:46 crc kubenswrapper[4637]: I1201 14:47:46.771375 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:46 crc kubenswrapper[4637]: E1201 14:47:46.771636 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:47:46 crc kubenswrapper[4637]: E1201 14:47:46.771831 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:47:46 crc kubenswrapper[4637]: E1201 14:47:46.771904 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:47 crc kubenswrapper[4637]: I1201 14:47:47.771168 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:47 crc kubenswrapper[4637]: E1201 14:47:47.771374 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:48 crc kubenswrapper[4637]: I1201 14:47:48.770464 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:48 crc kubenswrapper[4637]: I1201 14:47:48.770518 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:48 crc kubenswrapper[4637]: I1201 14:47:48.770481 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:48 crc kubenswrapper[4637]: E1201 14:47:48.770650 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:48 crc kubenswrapper[4637]: E1201 14:47:48.770761 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:47:48 crc kubenswrapper[4637]: E1201 14:47:48.770854 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:47:49 crc kubenswrapper[4637]: I1201 14:47:49.772095 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:49 crc kubenswrapper[4637]: E1201 14:47:49.772308 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:49 crc kubenswrapper[4637]: E1201 14:47:49.807243 4637 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 01 14:47:49 crc kubenswrapper[4637]: E1201 14:47:49.946553 4637 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 01 14:47:50 crc kubenswrapper[4637]: I1201 14:47:50.771026 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:50 crc kubenswrapper[4637]: I1201 14:47:50.771099 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:50 crc kubenswrapper[4637]: I1201 14:47:50.771121 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:50 crc kubenswrapper[4637]: E1201 14:47:50.771495 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:50 crc kubenswrapper[4637]: E1201 14:47:50.771679 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:47:50 crc kubenswrapper[4637]: E1201 14:47:50.771795 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:47:51 crc kubenswrapper[4637]: I1201 14:47:51.771368 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:51 crc kubenswrapper[4637]: E1201 14:47:51.771489 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:52 crc kubenswrapper[4637]: I1201 14:47:52.770581 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:52 crc kubenswrapper[4637]: E1201 14:47:52.770760 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:52 crc kubenswrapper[4637]: I1201 14:47:52.771102 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:52 crc kubenswrapper[4637]: E1201 14:47:52.771254 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:47:52 crc kubenswrapper[4637]: I1201 14:47:52.771321 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:52 crc kubenswrapper[4637]: E1201 14:47:52.771493 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:47:53 crc kubenswrapper[4637]: I1201 14:47:53.771196 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:53 crc kubenswrapper[4637]: E1201 14:47:53.771378 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:54 crc kubenswrapper[4637]: I1201 14:47:54.771030 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:54 crc kubenswrapper[4637]: I1201 14:47:54.771067 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:54 crc kubenswrapper[4637]: I1201 14:47:54.771039 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:54 crc kubenswrapper[4637]: E1201 14:47:54.771146 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:54 crc kubenswrapper[4637]: E1201 14:47:54.771241 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:47:54 crc kubenswrapper[4637]: E1201 14:47:54.771329 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:47:54 crc kubenswrapper[4637]: E1201 14:47:54.947865 4637 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 01 14:47:55 crc kubenswrapper[4637]: I1201 14:47:55.770713 4637 scope.go:117] "RemoveContainer" containerID="a163b9bea4f475be435e2bcf52012f4682da33e52b9a6b4b6dd6a71b59045a26" Dec 01 14:47:55 crc kubenswrapper[4637]: I1201 14:47:55.770787 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:55 crc kubenswrapper[4637]: E1201 14:47:55.771214 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:55 crc kubenswrapper[4637]: I1201 14:47:55.794767 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrr4c" podStartSLOduration=106.794753445 podStartE2EDuration="1m46.794753445s" podCreationTimestamp="2025-12-01 14:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:47:46.341241279 +0000 UTC m=+116.858950107" watchObservedRunningTime="2025-12-01 14:47:55.794753445 +0000 UTC m=+126.312462273" Dec 01 14:47:56 crc kubenswrapper[4637]: I1201 14:47:56.362059 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n2brl_f64d8237-8116-4742-8d7f-9f6e8018e4c2/kube-multus/1.log" Dec 01 14:47:56 crc kubenswrapper[4637]: I1201 14:47:56.362110 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-n2brl" event={"ID":"f64d8237-8116-4742-8d7f-9f6e8018e4c2","Type":"ContainerStarted","Data":"9cd2c8aa79d76f9a0e2c45cff0962ab688d8220c4f01310db1c1cd4d4910c4e4"} Dec 01 14:47:56 crc kubenswrapper[4637]: I1201 14:47:56.771198 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:56 crc kubenswrapper[4637]: I1201 14:47:56.771274 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:56 crc kubenswrapper[4637]: I1201 14:47:56.771301 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:56 crc kubenswrapper[4637]: E1201 14:47:56.771347 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:56 crc kubenswrapper[4637]: E1201 14:47:56.771618 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:47:56 crc kubenswrapper[4637]: E1201 14:47:56.771683 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:47:56 crc kubenswrapper[4637]: I1201 14:47:56.771981 4637 scope.go:117] "RemoveContainer" containerID="abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292" Dec 01 14:47:57 crc kubenswrapper[4637]: I1201 14:47:57.367282 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhl62_d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831/ovnkube-controller/3.log" Dec 01 14:47:57 crc kubenswrapper[4637]: I1201 14:47:57.369520 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" event={"ID":"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831","Type":"ContainerStarted","Data":"c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1"} Dec 01 14:47:57 crc kubenswrapper[4637]: I1201 14:47:57.370417 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:47:57 crc kubenswrapper[4637]: I1201 14:47:57.399811 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" podStartSLOduration=107.399797128 podStartE2EDuration="1m47.399797128s" podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:47:57.399044707 +0000 UTC m=+127.916753535" watchObservedRunningTime="2025-12-01 14:47:57.399797128 +0000 UTC m=+127.917505956" Dec 01 14:47:57 crc kubenswrapper[4637]: I1201 14:47:57.527924 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7w2l8"] Dec 01 14:47:57 crc kubenswrapper[4637]: I1201 14:47:57.528063 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:57 crc kubenswrapper[4637]: E1201 14:47:57.528157 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:58 crc kubenswrapper[4637]: I1201 14:47:58.972193 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:47:58 crc kubenswrapper[4637]: E1201 14:47:58.972545 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:47:58 crc kubenswrapper[4637]: I1201 14:47:58.972243 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:47:58 crc kubenswrapper[4637]: E1201 14:47:58.972632 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:47:58 crc kubenswrapper[4637]: I1201 14:47:58.972276 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:47:58 crc kubenswrapper[4637]: E1201 14:47:58.972702 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:47:58 crc kubenswrapper[4637]: I1201 14:47:58.972213 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:47:58 crc kubenswrapper[4637]: E1201 14:47:58.972849 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:47:59 crc kubenswrapper[4637]: E1201 14:47:59.948577 4637 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 01 14:48:00 crc kubenswrapper[4637]: I1201 14:48:00.771913 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:48:00 crc kubenswrapper[4637]: E1201 14:48:00.772095 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:48:00 crc kubenswrapper[4637]: I1201 14:48:00.772294 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:48:00 crc kubenswrapper[4637]: E1201 14:48:00.772363 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:48:00 crc kubenswrapper[4637]: I1201 14:48:00.772579 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:48:00 crc kubenswrapper[4637]: E1201 14:48:00.772671 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:48:00 crc kubenswrapper[4637]: I1201 14:48:00.772819 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:48:00 crc kubenswrapper[4637]: E1201 14:48:00.773147 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:48:02 crc kubenswrapper[4637]: I1201 14:48:02.771355 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:48:02 crc kubenswrapper[4637]: I1201 14:48:02.771370 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:48:02 crc kubenswrapper[4637]: I1201 14:48:02.771401 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:48:02 crc kubenswrapper[4637]: I1201 14:48:02.771463 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:48:02 crc kubenswrapper[4637]: E1201 14:48:02.771553 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:48:02 crc kubenswrapper[4637]: E1201 14:48:02.771614 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:48:02 crc kubenswrapper[4637]: E1201 14:48:02.771752 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:48:02 crc kubenswrapper[4637]: E1201 14:48:02.771861 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:48:04 crc kubenswrapper[4637]: I1201 14:48:04.770523 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:48:04 crc kubenswrapper[4637]: I1201 14:48:04.770763 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:48:04 crc kubenswrapper[4637]: E1201 14:48:04.770875 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 14:48:04 crc kubenswrapper[4637]: I1201 14:48:04.770921 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:48:04 crc kubenswrapper[4637]: I1201 14:48:04.771036 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:48:04 crc kubenswrapper[4637]: E1201 14:48:04.771126 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 14:48:04 crc kubenswrapper[4637]: E1201 14:48:04.771203 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 14:48:04 crc kubenswrapper[4637]: E1201 14:48:04.771323 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7w2l8" podUID="435e8f74-9c96-4508-b6a6-a1a2280f8176" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.733377 4637 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.782667 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jtkrh"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.783294 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jtkrh" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.786798 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j4nw2"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.787606 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j4nw2" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.790238 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-lmzmr"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.791316 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.799022 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.799828 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.800303 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.801921 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.802272 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.802590 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.817682 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.817990 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.818042 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.817991 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 01 14:48:05 crc 
kubenswrapper[4637]: I1201 14:48:05.818211 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.818255 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.818333 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.818452 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-bwdtf"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.818811 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.818982 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.819021 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bwdtf" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.819202 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.819370 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.819506 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.819961 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.821513 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.821846 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8t9kz"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.822283 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsmrs"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.822590 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zllnr"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.822634 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8t9kz" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.822696 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.822866 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-zllnr" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.822602 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsmrs" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.823506 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.827593 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kvhqq"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.828075 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.830740 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8rphr"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.832712 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-z2gpq"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.833763 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8rphr" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.833882 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.834789 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6cwvd"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.834977 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-z2gpq" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.835256 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6mf4"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.835550 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.835568 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6cwvd" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.835848 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.836233 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-98z2t"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.836724 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-98z2t" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.837594 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6mf4" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.843135 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.844065 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.845295 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsmrs"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.846835 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/795c57e9-15ad-4f5d-9e6a-b4df4bf13d53-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nsmrs\" (UID: \"795c57e9-15ad-4f5d-9e6a-b4df4bf13d53\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsmrs" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.846869 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8720c46c-d2b9-4f4e-8ca0-379ff5e30923-etcd-client\") pod \"apiserver-7bbb656c7d-mdrrl\" (UID: \"8720c46c-d2b9-4f4e-8ca0-379ff5e30923\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.846889 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8720c46c-d2b9-4f4e-8ca0-379ff5e30923-serving-cert\") pod \"apiserver-7bbb656c7d-mdrrl\" (UID: \"8720c46c-d2b9-4f4e-8ca0-379ff5e30923\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.846917 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrnvz\" (UniqueName: \"kubernetes.io/projected/8720c46c-d2b9-4f4e-8ca0-379ff5e30923-kube-api-access-mrnvz\") pod \"apiserver-7bbb656c7d-mdrrl\" (UID: \"8720c46c-d2b9-4f4e-8ca0-379ff5e30923\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.846995 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zllnr"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.847043 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6462925c-d528-4dd6-a6e1-55563db83168-console-config\") pod \"console-f9d7485db-98z2t\" (UID: \"6462925c-d528-4dd6-a6e1-55563db83168\") " pod="openshift-console/console-f9d7485db-98z2t" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.847184 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8720c46c-d2b9-4f4e-8ca0-379ff5e30923-audit-policies\") pod \"apiserver-7bbb656c7d-mdrrl\" (UID: \"8720c46c-d2b9-4f4e-8ca0-379ff5e30923\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.847235 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6462925c-d528-4dd6-a6e1-55563db83168-console-serving-cert\") pod \"console-f9d7485db-98z2t\" (UID: \"6462925c-d528-4dd6-a6e1-55563db83168\") " pod="openshift-console/console-f9d7485db-98z2t" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.847280 4637 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8720c46c-d2b9-4f4e-8ca0-379ff5e30923-audit-dir\") pod \"apiserver-7bbb656c7d-mdrrl\" (UID: \"8720c46c-d2b9-4f4e-8ca0-379ff5e30923\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.847320 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/795c57e9-15ad-4f5d-9e6a-b4df4bf13d53-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nsmrs\" (UID: \"795c57e9-15ad-4f5d-9e6a-b4df4bf13d53\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsmrs" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.847590 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46qgh\" (UniqueName: \"kubernetes.io/projected/795c57e9-15ad-4f5d-9e6a-b4df4bf13d53-kube-api-access-46qgh\") pod \"openshift-controller-manager-operator-756b6f6bc6-nsmrs\" (UID: \"795c57e9-15ad-4f5d-9e6a-b4df4bf13d53\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsmrs" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.847638 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6462925c-d528-4dd6-a6e1-55563db83168-console-oauth-config\") pod \"console-f9d7485db-98z2t\" (UID: \"6462925c-d528-4dd6-a6e1-55563db83168\") " pod="openshift-console/console-f9d7485db-98z2t" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.847675 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/6462925c-d528-4dd6-a6e1-55563db83168-oauth-serving-cert\") pod \"console-f9d7485db-98z2t\" (UID: \"6462925c-d528-4dd6-a6e1-55563db83168\") " pod="openshift-console/console-f9d7485db-98z2t" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.847720 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8720c46c-d2b9-4f4e-8ca0-379ff5e30923-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mdrrl\" (UID: \"8720c46c-d2b9-4f4e-8ca0-379ff5e30923\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.847821 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8720c46c-d2b9-4f4e-8ca0-379ff5e30923-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mdrrl\" (UID: \"8720c46c-d2b9-4f4e-8ca0-379ff5e30923\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.847875 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8720c46c-d2b9-4f4e-8ca0-379ff5e30923-encryption-config\") pod \"apiserver-7bbb656c7d-mdrrl\" (UID: \"8720c46c-d2b9-4f4e-8ca0-379ff5e30923\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.847902 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6462925c-d528-4dd6-a6e1-55563db83168-service-ca\") pod \"console-f9d7485db-98z2t\" (UID: \"6462925c-d528-4dd6-a6e1-55563db83168\") " pod="openshift-console/console-f9d7485db-98z2t" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.847959 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv54r\" (UniqueName: \"kubernetes.io/projected/6462925c-d528-4dd6-a6e1-55563db83168-kube-api-access-sv54r\") pod \"console-f9d7485db-98z2t\" (UID: \"6462925c-d528-4dd6-a6e1-55563db83168\") " pod="openshift-console/console-f9d7485db-98z2t" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.847989 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6462925c-d528-4dd6-a6e1-55563db83168-trusted-ca-bundle\") pod \"console-f9d7485db-98z2t\" (UID: \"6462925c-d528-4dd6-a6e1-55563db83168\") " pod="openshift-console/console-f9d7485db-98z2t" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.849597 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8rphr"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.860616 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.861330 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.861485 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.861656 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.861785 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.861872 4637 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.861892 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.861959 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.862035 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.862057 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.862102 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.862148 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.862161 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.862179 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.862279 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.862339 4637 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.862388 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.862416 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.862479 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.862506 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.862657 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.862418 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.862769 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.862876 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.863852 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.863966 4637 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"etcd-client" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.864055 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.864151 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.864321 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.864423 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.864546 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.864642 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.864749 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.864854 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.864870 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.864966 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.867339 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j4nw2"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.867654 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-lmzmr"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.865039 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.865080 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.865078 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.865106 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.865110 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.865138 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.865504 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.865801 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.867127 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.868730 
4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.868774 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.868809 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.868738 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.867210 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.867319 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.867171 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.869009 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.872145 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.872427 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.873017 4637 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.873547 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.873700 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.873845 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.874024 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.888254 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.888475 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.889112 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.889303 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.901487 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.902246 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 01 14:48:05 crc kubenswrapper[4637]: 
I1201 14:48:05.909368 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jtkrh"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.928295 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.928498 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6mf4"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.928568 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.930283 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.934156 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.935539 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.935675 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.935853 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.936204 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.936509 4637 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-config-operator/openshift-config-operator-7777fb866f-z4qtk"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.937127 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8b86l"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.937454 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-flvbk"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.937807 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4qtk" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.938640 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-flvbk" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.938850 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.943381 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.945174 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-btm95"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.945645 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-tcvq5"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.946291 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-tcvq5" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.946477 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6cwvd"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.946671 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-btm95" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.948996 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.949048 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.950387 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8720c46c-d2b9-4f4e-8ca0-379ff5e30923-audit-dir\") pod \"apiserver-7bbb656c7d-mdrrl\" (UID: \"8720c46c-d2b9-4f4e-8ca0-379ff5e30923\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.950471 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8720c46c-d2b9-4f4e-8ca0-379ff5e30923-audit-dir\") pod \"apiserver-7bbb656c7d-mdrrl\" (UID: \"8720c46c-d2b9-4f4e-8ca0-379ff5e30923\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.950502 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/79026dfe-f0d3-4167-b85e-b1dc662b9548-auth-proxy-config\") pod 
\"machine-approver-56656f9798-bwdtf\" (UID: \"79026dfe-f0d3-4167-b85e-b1dc662b9548\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bwdtf" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.950569 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6f680eac-8309-428b-9b5e-f5324aaf426a-images\") pod \"machine-api-operator-5694c8668f-jtkrh\" (UID: \"6f680eac-8309-428b-9b5e-f5324aaf426a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jtkrh" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.950696 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb60680f-a87a-4086-b701-91f89a1d123f-client-ca\") pod \"controller-manager-879f6c89f-8rphr\" (UID: \"bb60680f-a87a-4086-b701-91f89a1d123f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8rphr" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.950742 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7dfebc2-dcf5-480c-81fa-534f5f0b739e-config\") pod \"console-operator-58897d9998-zllnr\" (UID: \"c7dfebc2-dcf5-480c-81fa-534f5f0b739e\") " pod="openshift-console-operator/console-operator-58897d9998-zllnr" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.950787 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-serving-cert\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.950815 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.950846 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bb60680f-a87a-4086-b701-91f89a1d123f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8rphr\" (UID: \"bb60680f-a87a-4086-b701-91f89a1d123f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8rphr" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.950866 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f680eac-8309-428b-9b5e-f5324aaf426a-config\") pod \"machine-api-operator-5694c8668f-jtkrh\" (UID: \"6f680eac-8309-428b-9b5e-f5324aaf426a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jtkrh" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.950883 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7dfebc2-dcf5-480c-81fa-534f5f0b739e-trusted-ca\") pod \"console-operator-58897d9998-zllnr\" (UID: \"c7dfebc2-dcf5-480c-81fa-534f5f0b739e\") " pod="openshift-console-operator/console-operator-58897d9998-zllnr" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.950904 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8f1b03-2eb3-4a9a-9d89-860b3efce88b-config\") pod \"authentication-operator-69f744f599-8t9kz\" (UID: \"2d8f1b03-2eb3-4a9a-9d89-860b3efce88b\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-8t9kz" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.950943 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj6ss\" (UniqueName: \"kubernetes.io/projected/0ad42fe5-9e02-4d4e-849e-a03f83b4346d-kube-api-access-xj6ss\") pod \"cluster-samples-operator-665b6dd947-p6mf4\" (UID: \"0ad42fe5-9e02-4d4e-849e-a03f83b4346d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6mf4" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.950961 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5nwp\" (UniqueName: \"kubernetes.io/projected/33b8eb32-3572-48d9-a322-5ff99b870d99-kube-api-access-t5nwp\") pod \"openshift-apiserver-operator-796bbdcf4f-j4nw2\" (UID: \"33b8eb32-3572-48d9-a322-5ff99b870d99\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j4nw2" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.950978 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7950539-3e35-4e74-8a69-c2b3c3ba928f-audit-dir\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.950979 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ncnx"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.950994 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c77fa87b-60af-46d6-a0e1-ef83ee35ba3f-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-dc59b4c8b-6cwvd\" (UID: \"c77fa87b-60af-46d6-a0e1-ef83ee35ba3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6cwvd" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.951342 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d8f1b03-2eb3-4a9a-9d89-860b3efce88b-serving-cert\") pod \"authentication-operator-69f744f599-8t9kz\" (UID: \"2d8f1b03-2eb3-4a9a-9d89-860b3efce88b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8t9kz" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.951364 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d8f1b03-2eb3-4a9a-9d89-860b3efce88b-service-ca-bundle\") pod \"authentication-operator-69f744f599-8t9kz\" (UID: \"2d8f1b03-2eb3-4a9a-9d89-860b3efce88b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8t9kz" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.951384 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7rsr\" (UniqueName: \"kubernetes.io/projected/cbf56a46-431a-40ef-985f-8eb89ee80d70-kube-api-access-g7rsr\") pod \"downloads-7954f5f757-z2gpq\" (UID: \"cbf56a46-431a-40ef-985f-8eb89ee80d70\") " pod="openshift-console/downloads-7954f5f757-z2gpq" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.951404 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4gzn\" (UniqueName: \"kubernetes.io/projected/b7950539-3e35-4e74-8a69-c2b3c3ba928f-kube-api-access-t4gzn\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:05 crc 
kubenswrapper[4637]: I1201 14:48:05.951425 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-trusted-ca-bundle\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.951451 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/795c57e9-15ad-4f5d-9e6a-b4df4bf13d53-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nsmrs\" (UID: \"795c57e9-15ad-4f5d-9e6a-b4df4bf13d53\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsmrs" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.951469 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.951485 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb60680f-a87a-4086-b701-91f89a1d123f-config\") pod \"controller-manager-879f6c89f-8rphr\" (UID: \"bb60680f-a87a-4086-b701-91f89a1d123f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8rphr" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.951499 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f778c361-3570-4a96-b4d1-1ba163ce04b9-config\") pod \"route-controller-manager-6576b87f9c-htdcl\" (UID: \"f778c361-3570-4a96-b4d1-1ba163ce04b9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.951530 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-node-pullsecrets\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.951548 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c77fa87b-60af-46d6-a0e1-ef83ee35ba3f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6cwvd\" (UID: \"c77fa87b-60af-46d6-a0e1-ef83ee35ba3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6cwvd" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.951566 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-encryption-config\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.951596 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f778c361-3570-4a96-b4d1-1ba163ce04b9-serving-cert\") pod \"route-controller-manager-6576b87f9c-htdcl\" (UID: \"f778c361-3570-4a96-b4d1-1ba163ce04b9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl" Dec 
01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.951533 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ncnx" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.951962 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.951988 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46qgh\" (UniqueName: \"kubernetes.io/projected/795c57e9-15ad-4f5d-9e6a-b4df4bf13d53-kube-api-access-46qgh\") pod \"openshift-controller-manager-operator-756b6f6bc6-nsmrs\" (UID: \"795c57e9-15ad-4f5d-9e6a-b4df4bf13d53\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsmrs" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.952024 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6462925c-d528-4dd6-a6e1-55563db83168-console-oauth-config\") pod \"console-f9d7485db-98z2t\" (UID: \"6462925c-d528-4dd6-a6e1-55563db83168\") " pod="openshift-console/console-f9d7485db-98z2t" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.952153 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-etcd-client\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:05 crc 
kubenswrapper[4637]: I1201 14:48:05.952224 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb60680f-a87a-4086-b701-91f89a1d123f-serving-cert\") pod \"controller-manager-879f6c89f-8rphr\" (UID: \"bb60680f-a87a-4086-b701-91f89a1d123f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8rphr" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.952289 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.952307 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33b8eb32-3572-48d9-a322-5ff99b870d99-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-j4nw2\" (UID: \"33b8eb32-3572-48d9-a322-5ff99b870d99\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j4nw2" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.952328 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.952344 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/c77fa87b-60af-46d6-a0e1-ef83ee35ba3f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6cwvd\" (UID: \"c77fa87b-60af-46d6-a0e1-ef83ee35ba3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6cwvd" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.952387 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6462925c-d528-4dd6-a6e1-55563db83168-oauth-serving-cert\") pod \"console-f9d7485db-98z2t\" (UID: \"6462925c-d528-4dd6-a6e1-55563db83168\") " pod="openshift-console/console-f9d7485db-98z2t" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.953098 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.958729 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6462925c-d528-4dd6-a6e1-55563db83168-oauth-serving-cert\") pod \"console-f9d7485db-98z2t\" (UID: \"6462925c-d528-4dd6-a6e1-55563db83168\") " pod="openshift-console/console-f9d7485db-98z2t" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.959211 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.959726 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.960758 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/79026dfe-f0d3-4167-b85e-b1dc662b9548-machine-approver-tls\") pod \"machine-approver-56656f9798-bwdtf\" (UID: \"79026dfe-f0d3-4167-b85e-b1dc662b9548\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bwdtf" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.960843 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8720c46c-d2b9-4f4e-8ca0-379ff5e30923-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mdrrl\" (UID: \"8720c46c-d2b9-4f4e-8ca0-379ff5e30923\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.960893 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-image-import-ca\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.982308 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8720c46c-d2b9-4f4e-8ca0-379ff5e30923-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mdrrl\" (UID: \"8720c46c-d2b9-4f4e-8ca0-379ff5e30923\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.982395 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79026dfe-f0d3-4167-b85e-b1dc662b9548-config\") pod \"machine-approver-56656f9798-bwdtf\" (UID: \"79026dfe-f0d3-4167-b85e-b1dc662b9548\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bwdtf" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.982422 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-etcd-serving-ca\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.982457 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7q4t\" (UniqueName: \"kubernetes.io/projected/c7dfebc2-dcf5-480c-81fa-534f5f0b739e-kube-api-access-c7q4t\") pod \"console-operator-58897d9998-zllnr\" (UID: \"c7dfebc2-dcf5-480c-81fa-534f5f0b739e\") " pod="openshift-console-operator/console-operator-58897d9998-zllnr" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.982477 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-config\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.982505 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqbb2\" (UniqueName: \"kubernetes.io/projected/2d8f1b03-2eb3-4a9a-9d89-860b3efce88b-kube-api-access-bqbb2\") pod \"authentication-operator-69f744f599-8t9kz\" (UID: \"2d8f1b03-2eb3-4a9a-9d89-860b3efce88b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8t9kz" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.982529 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 
01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.982879 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8720c46c-d2b9-4f4e-8ca0-379ff5e30923-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mdrrl\" (UID: \"8720c46c-d2b9-4f4e-8ca0-379ff5e30923\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.983226 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8720c46c-d2b9-4f4e-8ca0-379ff5e30923-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mdrrl\" (UID: \"8720c46c-d2b9-4f4e-8ca0-379ff5e30923\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.987193 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-j7ft6"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.988002 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-z2gpq"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.988023 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f4jrv"] Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.990910 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f4jrv" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.992290 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.992750 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j7ft6" Dec 01 14:48:05 crc kubenswrapper[4637]: I1201 14:48:05.993597 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:05.996064 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8720c46c-d2b9-4f4e-8ca0-379ff5e30923-encryption-config\") pod \"apiserver-7bbb656c7d-mdrrl\" (UID: \"8720c46c-d2b9-4f4e-8ca0-379ff5e30923\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.020800 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6462925c-d528-4dd6-a6e1-55563db83168-service-ca\") pod \"console-f9d7485db-98z2t\" (UID: \"6462925c-d528-4dd6-a6e1-55563db83168\") " pod="openshift-console/console-f9d7485db-98z2t" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.020839 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv54r\" (UniqueName: \"kubernetes.io/projected/6462925c-d528-4dd6-a6e1-55563db83168-kube-api-access-sv54r\") pod \"console-f9d7485db-98z2t\" (UID: \"6462925c-d528-4dd6-a6e1-55563db83168\") " pod="openshift-console/console-f9d7485db-98z2t" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.020860 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6462925c-d528-4dd6-a6e1-55563db83168-trusted-ca-bundle\") pod \"console-f9d7485db-98z2t\" (UID: \"6462925c-d528-4dd6-a6e1-55563db83168\") " pod="openshift-console/console-f9d7485db-98z2t" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.020883 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33b8eb32-3572-48d9-a322-5ff99b870d99-config\") pod \"openshift-apiserver-operator-796bbdcf4f-j4nw2\" (UID: \"33b8eb32-3572-48d9-a322-5ff99b870d99\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j4nw2" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.020906 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-audit-dir\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.020921 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f778c361-3570-4a96-b4d1-1ba163ce04b9-client-ca\") pod \"route-controller-manager-6576b87f9c-htdcl\" (UID: \"f778c361-3570-4a96-b4d1-1ba163ce04b9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.020955 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f680eac-8309-428b-9b5e-f5324aaf426a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jtkrh\" (UID: \"6f680eac-8309-428b-9b5e-f5324aaf426a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jtkrh" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.020971 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: 
\"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.020990 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgr4c\" (UniqueName: \"kubernetes.io/projected/c77fa87b-60af-46d6-a0e1-ef83ee35ba3f-kube-api-access-pgr4c\") pod \"cluster-image-registry-operator-dc59b4c8b-6cwvd\" (UID: \"c77fa87b-60af-46d6-a0e1-ef83ee35ba3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6cwvd" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.021008 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7dfebc2-dcf5-480c-81fa-534f5f0b739e-serving-cert\") pod \"console-operator-58897d9998-zllnr\" (UID: \"c7dfebc2-dcf5-480c-81fa-534f5f0b739e\") " pod="openshift-console-operator/console-operator-58897d9998-zllnr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.021028 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/795c57e9-15ad-4f5d-9e6a-b4df4bf13d53-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nsmrs\" (UID: \"795c57e9-15ad-4f5d-9e6a-b4df4bf13d53\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsmrs" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.021043 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8720c46c-d2b9-4f4e-8ca0-379ff5e30923-etcd-client\") pod \"apiserver-7bbb656c7d-mdrrl\" (UID: \"8720c46c-d2b9-4f4e-8ca0-379ff5e30923\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.021060 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.021081 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8720c46c-d2b9-4f4e-8ca0-379ff5e30923-serving-cert\") pod \"apiserver-7bbb656c7d-mdrrl\" (UID: \"8720c46c-d2b9-4f4e-8ca0-379ff5e30923\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.021096 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch2c4\" (UniqueName: \"kubernetes.io/projected/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-kube-api-access-ch2c4\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.021112 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64pj4\" (UniqueName: \"kubernetes.io/projected/bb60680f-a87a-4086-b701-91f89a1d123f-kube-api-access-64pj4\") pod \"controller-manager-879f6c89f-8rphr\" (UID: \"bb60680f-a87a-4086-b701-91f89a1d123f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8rphr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.021129 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: 
\"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.021146 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.021163 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.021189 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrnvz\" (UniqueName: \"kubernetes.io/projected/8720c46c-d2b9-4f4e-8ca0-379ff5e30923-kube-api-access-mrnvz\") pod \"apiserver-7bbb656c7d-mdrrl\" (UID: \"8720c46c-d2b9-4f4e-8ca0-379ff5e30923\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.021206 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6462925c-d528-4dd6-a6e1-55563db83168-console-config\") pod \"console-f9d7485db-98z2t\" (UID: \"6462925c-d528-4dd6-a6e1-55563db83168\") " pod="openshift-console/console-f9d7485db-98z2t" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.021224 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-xswtv\" (UniqueName: \"kubernetes.io/projected/6f680eac-8309-428b-9b5e-f5324aaf426a-kube-api-access-xswtv\") pod \"machine-api-operator-5694c8668f-jtkrh\" (UID: \"6f680eac-8309-428b-9b5e-f5324aaf426a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jtkrh" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.021238 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d8f1b03-2eb3-4a9a-9d89-860b3efce88b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8t9kz\" (UID: \"2d8f1b03-2eb3-4a9a-9d89-860b3efce88b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8t9kz" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.021256 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-audit\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.021275 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljdzs\" (UniqueName: \"kubernetes.io/projected/f778c361-3570-4a96-b4d1-1ba163ce04b9-kube-api-access-ljdzs\") pod \"route-controller-manager-6576b87f9c-htdcl\" (UID: \"f778c361-3570-4a96-b4d1-1ba163ce04b9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.021312 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8720c46c-d2b9-4f4e-8ca0-379ff5e30923-audit-policies\") pod \"apiserver-7bbb656c7d-mdrrl\" (UID: \"8720c46c-d2b9-4f4e-8ca0-379ff5e30923\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.021329 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7950539-3e35-4e74-8a69-c2b3c3ba928f-audit-policies\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.021346 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxfht\" (UniqueName: \"kubernetes.io/projected/79026dfe-f0d3-4167-b85e-b1dc662b9548-kube-api-access-lxfht\") pod \"machine-approver-56656f9798-bwdtf\" (UID: \"79026dfe-f0d3-4167-b85e-b1dc662b9548\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bwdtf" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.021363 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0ad42fe5-9e02-4d4e-849e-a03f83b4346d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-p6mf4\" (UID: \"0ad42fe5-9e02-4d4e-849e-a03f83b4346d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6mf4" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.021380 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6462925c-d528-4dd6-a6e1-55563db83168-console-serving-cert\") pod \"console-f9d7485db-98z2t\" (UID: \"6462925c-d528-4dd6-a6e1-55563db83168\") " pod="openshift-console/console-f9d7485db-98z2t" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:05.997007 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/6462925c-d528-4dd6-a6e1-55563db83168-console-oauth-config\") pod \"console-f9d7485db-98z2t\" (UID: \"6462925c-d528-4dd6-a6e1-55563db83168\") " pod="openshift-console/console-f9d7485db-98z2t" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.024821 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6462925c-d528-4dd6-a6e1-55563db83168-service-ca\") pod \"console-f9d7485db-98z2t\" (UID: \"6462925c-d528-4dd6-a6e1-55563db83168\") " pod="openshift-console/console-f9d7485db-98z2t" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.005584 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/795c57e9-15ad-4f5d-9e6a-b4df4bf13d53-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nsmrs\" (UID: \"795c57e9-15ad-4f5d-9e6a-b4df4bf13d53\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsmrs" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.014719 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8720c46c-d2b9-4f4e-8ca0-379ff5e30923-encryption-config\") pod \"apiserver-7bbb656c7d-mdrrl\" (UID: \"8720c46c-d2b9-4f4e-8ca0-379ff5e30923\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.025806 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6462925c-d528-4dd6-a6e1-55563db83168-trusted-ca-bundle\") pod \"console-f9d7485db-98z2t\" (UID: \"6462925c-d528-4dd6-a6e1-55563db83168\") " pod="openshift-console/console-f9d7485db-98z2t" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.017498 4637 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.019061 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.026687 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6462925c-d528-4dd6-a6e1-55563db83168-console-config\") pod \"console-f9d7485db-98z2t\" (UID: \"6462925c-d528-4dd6-a6e1-55563db83168\") " pod="openshift-console/console-f9d7485db-98z2t" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.019122 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cnchp"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.027180 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-57282"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.027613 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-nbf7h"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.028976 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8720c46c-d2b9-4f4e-8ca0-379ff5e30923-etcd-client\") pod \"apiserver-7bbb656c7d-mdrrl\" (UID: \"8720c46c-d2b9-4f4e-8ca0-379ff5e30923\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.029511 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wpnfr"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.029568 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/795c57e9-15ad-4f5d-9e6a-b4df4bf13d53-config\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-nsmrs\" (UID: \"795c57e9-15ad-4f5d-9e6a-b4df4bf13d53\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsmrs" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.030011 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-wpnfr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.030131 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8720c46c-d2b9-4f4e-8ca0-379ff5e30923-audit-policies\") pod \"apiserver-7bbb656c7d-mdrrl\" (UID: \"8720c46c-d2b9-4f4e-8ca0-379ff5e30923\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.030347 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cnchp" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.030524 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-57282" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.030657 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-nbf7h" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.030706 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6462925c-d528-4dd6-a6e1-55563db83168-console-serving-cert\") pod \"console-f9d7485db-98z2t\" (UID: \"6462925c-d528-4dd6-a6e1-55563db83168\") " pod="openshift-console/console-f9d7485db-98z2t" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.031468 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8720c46c-d2b9-4f4e-8ca0-379ff5e30923-serving-cert\") pod \"apiserver-7bbb656c7d-mdrrl\" (UID: \"8720c46c-d2b9-4f4e-8ca0-379ff5e30923\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.032293 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.032732 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.035771 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gqjlb"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.037008 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-9rwl6"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.037236 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gqjlb" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.037482 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9rwl6" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.037741 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-c46sl"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.038999 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cx6wb"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.039772 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-244ll"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.040084 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c46sl" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.040179 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-244ll" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.040410 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cx6wb" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.040553 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6sz5"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.040906 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6sz5" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.042678 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-5js8r"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.043391 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-5js8r" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.044432 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410005-5pnzr"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.045230 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.047967 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wvhc8"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.048444 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wvhc8" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.048687 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29410005-5pnzr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.053658 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rdmp7"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.054412 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rdmp7" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.055389 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhnkh"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.055756 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhnkh" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.058882 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj7gt"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.059572 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-98z2t"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.059593 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.059687 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj7gt" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.065825 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.067211 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-btm95"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.070386 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kvhqq"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.070425 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-tcvq5"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.071836 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-f6mt7"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.073016 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-f6mt7" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.073078 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-z4qtk"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.074976 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8b86l"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.076314 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-j7ft6"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.077455 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8t9kz"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.078995 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-flvbk"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.080851 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cnchp"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.082998 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-lc9z9"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.083631 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-lc9z9" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.084784 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ncnx"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.085831 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.086462 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wpnfr"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.087903 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gqjlb"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.089245 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-c46sl"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.090545 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410005-5pnzr"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.091638 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-244ll"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.093347 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-5js8r"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.099675 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhnkh"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.099725 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-f6mt7"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 
14:48:06.102421 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f4jrv"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.106660 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.107082 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-57282"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.108666 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wvhc8"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.109516 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6sz5"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.110622 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-pxf52"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.111230 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pxf52" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.112306 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lc9z9"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.113011 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cx6wb"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.114659 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rdmp7"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.115232 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj7gt"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.116434 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pxf52"] Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.122038 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxfht\" (UniqueName: \"kubernetes.io/projected/79026dfe-f0d3-4167-b85e-b1dc662b9548-kube-api-access-lxfht\") pod \"machine-approver-56656f9798-bwdtf\" (UID: \"79026dfe-f0d3-4167-b85e-b1dc662b9548\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bwdtf" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.122065 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0ad42fe5-9e02-4d4e-849e-a03f83b4346d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-p6mf4\" (UID: \"0ad42fe5-9e02-4d4e-849e-a03f83b4346d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6mf4" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.122085 4637 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/79026dfe-f0d3-4167-b85e-b1dc662b9548-auth-proxy-config\") pod \"machine-approver-56656f9798-bwdtf\" (UID: \"79026dfe-f0d3-4167-b85e-b1dc662b9548\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bwdtf" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.122103 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6f680eac-8309-428b-9b5e-f5324aaf426a-images\") pod \"machine-api-operator-5694c8668f-jtkrh\" (UID: \"6f680eac-8309-428b-9b5e-f5324aaf426a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jtkrh" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.122127 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb60680f-a87a-4086-b701-91f89a1d123f-client-ca\") pod \"controller-manager-879f6c89f-8rphr\" (UID: \"bb60680f-a87a-4086-b701-91f89a1d123f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8rphr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.122145 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0b788a05-de9f-4ef6-985d-d9569f4a9860-available-featuregates\") pod \"openshift-config-operator-7777fb866f-z4qtk\" (UID: \"0b788a05-de9f-4ef6-985d-d9569f4a9860\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4qtk" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.122166 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-serving-cert\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " 
pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.122180 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7dfebc2-dcf5-480c-81fa-534f5f0b739e-config\") pod \"console-operator-58897d9998-zllnr\" (UID: \"c7dfebc2-dcf5-480c-81fa-534f5f0b739e\") " pod="openshift-console-operator/console-operator-58897d9998-zllnr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.122196 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bb60680f-a87a-4086-b701-91f89a1d123f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8rphr\" (UID: \"bb60680f-a87a-4086-b701-91f89a1d123f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8rphr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.122211 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f680eac-8309-428b-9b5e-f5324aaf426a-config\") pod \"machine-api-operator-5694c8668f-jtkrh\" (UID: \"6f680eac-8309-428b-9b5e-f5324aaf426a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jtkrh" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.122224 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7dfebc2-dcf5-480c-81fa-534f5f0b739e-trusted-ca\") pod \"console-operator-58897d9998-zllnr\" (UID: \"c7dfebc2-dcf5-480c-81fa-534f5f0b739e\") " pod="openshift-console-operator/console-operator-58897d9998-zllnr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.122239 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-user-template-login\") pod 
\"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.122255 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e7bc889e-bf57-415e-829b-9f1b91253db0-etcd-ca\") pod \"etcd-operator-b45778765-tcvq5\" (UID: \"e7bc889e-bf57-415e-829b-9f1b91253db0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcvq5" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.122318 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8f1b03-2eb3-4a9a-9d89-860b3efce88b-config\") pod \"authentication-operator-69f744f599-8t9kz\" (UID: \"2d8f1b03-2eb3-4a9a-9d89-860b3efce88b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8t9kz" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.122352 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj6ss\" (UniqueName: \"kubernetes.io/projected/0ad42fe5-9e02-4d4e-849e-a03f83b4346d-kube-api-access-xj6ss\") pod \"cluster-samples-operator-665b6dd947-p6mf4\" (UID: \"0ad42fe5-9e02-4d4e-849e-a03f83b4346d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6mf4" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.122370 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5nwp\" (UniqueName: \"kubernetes.io/projected/33b8eb32-3572-48d9-a322-5ff99b870d99-kube-api-access-t5nwp\") pod \"openshift-apiserver-operator-796bbdcf4f-j4nw2\" (UID: \"33b8eb32-3572-48d9-a322-5ff99b870d99\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j4nw2" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.122388 4637 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d8f1b03-2eb3-4a9a-9d89-860b3efce88b-serving-cert\") pod \"authentication-operator-69f744f599-8t9kz\" (UID: \"2d8f1b03-2eb3-4a9a-9d89-860b3efce88b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8t9kz" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.122405 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d8f1b03-2eb3-4a9a-9d89-860b3efce88b-service-ca-bundle\") pod \"authentication-operator-69f744f599-8t9kz\" (UID: \"2d8f1b03-2eb3-4a9a-9d89-860b3efce88b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8t9kz" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.122420 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7950539-3e35-4e74-8a69-c2b3c3ba928f-audit-dir\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.122436 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c77fa87b-60af-46d6-a0e1-ef83ee35ba3f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6cwvd\" (UID: \"c77fa87b-60af-46d6-a0e1-ef83ee35ba3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6cwvd" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.122453 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e7bc889e-bf57-415e-829b-9f1b91253db0-etcd-client\") pod \"etcd-operator-b45778765-tcvq5\" (UID: \"e7bc889e-bf57-415e-829b-9f1b91253db0\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-tcvq5" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.122475 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-trusted-ca-bundle\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.122497 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7rsr\" (UniqueName: \"kubernetes.io/projected/cbf56a46-431a-40ef-985f-8eb89ee80d70-kube-api-access-g7rsr\") pod \"downloads-7954f5f757-z2gpq\" (UID: \"cbf56a46-431a-40ef-985f-8eb89ee80d70\") " pod="openshift-console/downloads-7954f5f757-z2gpq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.122518 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gzn\" (UniqueName: \"kubernetes.io/projected/b7950539-3e35-4e74-8a69-c2b3c3ba928f-kube-api-access-t4gzn\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.122620 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123123 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-node-pullsecrets\") pod 
\"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123142 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb60680f-a87a-4086-b701-91f89a1d123f-config\") pod \"controller-manager-879f6c89f-8rphr\" (UID: \"bb60680f-a87a-4086-b701-91f89a1d123f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8rphr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123159 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f778c361-3570-4a96-b4d1-1ba163ce04b9-config\") pod \"route-controller-manager-6576b87f9c-htdcl\" (UID: \"f778c361-3570-4a96-b4d1-1ba163ce04b9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123174 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-encryption-config\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123189 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f778c361-3570-4a96-b4d1-1ba163ce04b9-serving-cert\") pod \"route-controller-manager-6576b87f9c-htdcl\" (UID: \"f778c361-3570-4a96-b4d1-1ba163ce04b9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123204 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123220 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c77fa87b-60af-46d6-a0e1-ef83ee35ba3f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6cwvd\" (UID: \"c77fa87b-60af-46d6-a0e1-ef83ee35ba3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6cwvd" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123242 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-etcd-client\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123259 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb60680f-a87a-4086-b701-91f89a1d123f-serving-cert\") pod \"controller-manager-879f6c89f-8rphr\" (UID: \"bb60680f-a87a-4086-b701-91f89a1d123f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8rphr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123273 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123289 4637 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33b8eb32-3572-48d9-a322-5ff99b870d99-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-j4nw2\" (UID: \"33b8eb32-3572-48d9-a322-5ff99b870d99\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j4nw2" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123306 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123323 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c77fa87b-60af-46d6-a0e1-ef83ee35ba3f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6cwvd\" (UID: \"c77fa87b-60af-46d6-a0e1-ef83ee35ba3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6cwvd" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123341 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7bc889e-bf57-415e-829b-9f1b91253db0-config\") pod \"etcd-operator-b45778765-tcvq5\" (UID: \"e7bc889e-bf57-415e-829b-9f1b91253db0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcvq5" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123356 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff0ed963-2cbb-46cd-9f0d-33a19a29c99f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-btm95\" (UID: 
\"ff0ed963-2cbb-46cd-9f0d-33a19a29c99f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-btm95" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123374 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/79026dfe-f0d3-4167-b85e-b1dc662b9548-machine-approver-tls\") pod \"machine-approver-56656f9798-bwdtf\" (UID: \"79026dfe-f0d3-4167-b85e-b1dc662b9548\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bwdtf" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123375 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f680eac-8309-428b-9b5e-f5324aaf426a-config\") pod \"machine-api-operator-5694c8668f-jtkrh\" (UID: \"6f680eac-8309-428b-9b5e-f5324aaf426a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jtkrh" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123392 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7bc889e-bf57-415e-829b-9f1b91253db0-serving-cert\") pod \"etcd-operator-b45778765-tcvq5\" (UID: \"e7bc889e-bf57-415e-829b-9f1b91253db0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcvq5" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123408 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmwm5\" (UniqueName: \"kubernetes.io/projected/0b788a05-de9f-4ef6-985d-d9569f4a9860-kube-api-access-dmwm5\") pod \"openshift-config-operator-7777fb866f-z4qtk\" (UID: \"0b788a05-de9f-4ef6-985d-d9569f4a9860\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4qtk" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123426 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-image-import-ca\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123449 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79026dfe-f0d3-4167-b85e-b1dc662b9548-config\") pod \"machine-approver-56656f9798-bwdtf\" (UID: \"79026dfe-f0d3-4167-b85e-b1dc662b9548\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bwdtf" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123465 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-etcd-serving-ca\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123513 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7q4t\" (UniqueName: \"kubernetes.io/projected/c7dfebc2-dcf5-480c-81fa-534f5f0b739e-kube-api-access-c7q4t\") pod \"console-operator-58897d9998-zllnr\" (UID: \"c7dfebc2-dcf5-480c-81fa-534f5f0b739e\") " pod="openshift-console-operator/console-operator-58897d9998-zllnr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123532 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff0ed963-2cbb-46cd-9f0d-33a19a29c99f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-btm95\" (UID: \"ff0ed963-2cbb-46cd-9f0d-33a19a29c99f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-btm95" Dec 01 14:48:06 crc 
kubenswrapper[4637]: I1201 14:48:06.123587 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b788a05-de9f-4ef6-985d-d9569f4a9860-serving-cert\") pod \"openshift-config-operator-7777fb866f-z4qtk\" (UID: \"0b788a05-de9f-4ef6-985d-d9569f4a9860\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4qtk" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123609 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-config\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123625 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqbb2\" (UniqueName: \"kubernetes.io/projected/2d8f1b03-2eb3-4a9a-9d89-860b3efce88b-kube-api-access-bqbb2\") pod \"authentication-operator-69f744f599-8t9kz\" (UID: \"2d8f1b03-2eb3-4a9a-9d89-860b3efce88b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8t9kz" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123643 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123673 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e7bc889e-bf57-415e-829b-9f1b91253db0-etcd-service-ca\") pod 
\"etcd-operator-b45778765-tcvq5\" (UID: \"e7bc889e-bf57-415e-829b-9f1b91253db0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcvq5" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123691 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33b8eb32-3572-48d9-a322-5ff99b870d99-config\") pod \"openshift-apiserver-operator-796bbdcf4f-j4nw2\" (UID: \"33b8eb32-3572-48d9-a322-5ff99b870d99\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j4nw2" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123707 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-audit-dir\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123721 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f778c361-3570-4a96-b4d1-1ba163ce04b9-client-ca\") pod \"route-controller-manager-6576b87f9c-htdcl\" (UID: \"f778c361-3570-4a96-b4d1-1ba163ce04b9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123737 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f680eac-8309-428b-9b5e-f5324aaf426a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jtkrh\" (UID: \"6f680eac-8309-428b-9b5e-f5324aaf426a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jtkrh" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123753 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123755 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb60680f-a87a-4086-b701-91f89a1d123f-client-ca\") pod \"controller-manager-879f6c89f-8rphr\" (UID: \"bb60680f-a87a-4086-b701-91f89a1d123f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8rphr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123779 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8f1b03-2eb3-4a9a-9d89-860b3efce88b-config\") pod \"authentication-operator-69f744f599-8t9kz\" (UID: \"2d8f1b03-2eb3-4a9a-9d89-860b3efce88b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8t9kz" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123814 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgr4c\" (UniqueName: \"kubernetes.io/projected/c77fa87b-60af-46d6-a0e1-ef83ee35ba3f-kube-api-access-pgr4c\") pod \"cluster-image-registry-operator-dc59b4c8b-6cwvd\" (UID: \"c77fa87b-60af-46d6-a0e1-ef83ee35ba3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6cwvd" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123872 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1216bb3b-7fde-42f8-b2f8-7b070dd63690-metrics-tls\") pod \"dns-operator-744455d44c-flvbk\" (UID: \"1216bb3b-7fde-42f8-b2f8-7b070dd63690\") " pod="openshift-dns-operator/dns-operator-744455d44c-flvbk" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 
14:48:06.123905 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7dfebc2-dcf5-480c-81fa-534f5f0b739e-serving-cert\") pod \"console-operator-58897d9998-zllnr\" (UID: \"c7dfebc2-dcf5-480c-81fa-534f5f0b739e\") " pod="openshift-console-operator/console-operator-58897d9998-zllnr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123922 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2g98\" (UniqueName: \"kubernetes.io/projected/1216bb3b-7fde-42f8-b2f8-7b070dd63690-kube-api-access-n2g98\") pod \"dns-operator-744455d44c-flvbk\" (UID: \"1216bb3b-7fde-42f8-b2f8-7b070dd63690\") " pod="openshift-dns-operator/dns-operator-744455d44c-flvbk" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123950 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123968 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch2c4\" (UniqueName: \"kubernetes.io/projected/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-kube-api-access-ch2c4\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.123984 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64pj4\" (UniqueName: \"kubernetes.io/projected/bb60680f-a87a-4086-b701-91f89a1d123f-kube-api-access-64pj4\") pod \"controller-manager-879f6c89f-8rphr\" (UID: 
\"bb60680f-a87a-4086-b701-91f89a1d123f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8rphr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.124002 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.124020 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.124035 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.124059 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpstt\" (UniqueName: \"kubernetes.io/projected/e7bc889e-bf57-415e-829b-9f1b91253db0-kube-api-access-xpstt\") pod \"etcd-operator-b45778765-tcvq5\" (UID: \"e7bc889e-bf57-415e-829b-9f1b91253db0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcvq5" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.124115 4637 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-audit\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.124136 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljdzs\" (UniqueName: \"kubernetes.io/projected/f778c361-3570-4a96-b4d1-1ba163ce04b9-kube-api-access-ljdzs\") pod \"route-controller-manager-6576b87f9c-htdcl\" (UID: \"f778c361-3570-4a96-b4d1-1ba163ce04b9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.124152 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xswtv\" (UniqueName: \"kubernetes.io/projected/6f680eac-8309-428b-9b5e-f5324aaf426a-kube-api-access-xswtv\") pod \"machine-api-operator-5694c8668f-jtkrh\" (UID: \"6f680eac-8309-428b-9b5e-f5324aaf426a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jtkrh" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.124168 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d8f1b03-2eb3-4a9a-9d89-860b3efce88b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8t9kz\" (UID: \"2d8f1b03-2eb3-4a9a-9d89-860b3efce88b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8t9kz" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.124184 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff0ed963-2cbb-46cd-9f0d-33a19a29c99f-config\") pod \"kube-controller-manager-operator-78b949d7b-btm95\" (UID: 
\"ff0ed963-2cbb-46cd-9f0d-33a19a29c99f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-btm95" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.124201 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7950539-3e35-4e74-8a69-c2b3c3ba928f-audit-policies\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.124474 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d8f1b03-2eb3-4a9a-9d89-860b3efce88b-service-ca-bundle\") pod \"authentication-operator-69f744f599-8t9kz\" (UID: \"2d8f1b03-2eb3-4a9a-9d89-860b3efce88b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8t9kz" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.124556 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7950539-3e35-4e74-8a69-c2b3c3ba928f-audit-dir\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.124832 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7950539-3e35-4e74-8a69-c2b3c3ba928f-audit-policies\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.125707 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c7dfebc2-dcf5-480c-81fa-534f5f0b739e-config\") pod \"console-operator-58897d9998-zllnr\" (UID: \"c7dfebc2-dcf5-480c-81fa-534f5f0b739e\") " pod="openshift-console-operator/console-operator-58897d9998-zllnr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.125767 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d8f1b03-2eb3-4a9a-9d89-860b3efce88b-serving-cert\") pod \"authentication-operator-69f744f599-8t9kz\" (UID: \"2d8f1b03-2eb3-4a9a-9d89-860b3efce88b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8t9kz" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.126215 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-image-import-ca\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.126706 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79026dfe-f0d3-4167-b85e-b1dc662b9548-config\") pod \"machine-approver-56656f9798-bwdtf\" (UID: \"79026dfe-f0d3-4167-b85e-b1dc662b9548\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bwdtf" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.126775 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-serving-cert\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.126899 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/c77fa87b-60af-46d6-a0e1-ef83ee35ba3f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6cwvd\" (UID: \"c77fa87b-60af-46d6-a0e1-ef83ee35ba3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6cwvd" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.127102 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-etcd-serving-ca\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.127778 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-config\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.128792 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.128883 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c77fa87b-60af-46d6-a0e1-ef83ee35ba3f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6cwvd\" (UID: \"c77fa87b-60af-46d6-a0e1-ef83ee35ba3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6cwvd" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.129367 4637 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7dfebc2-dcf5-480c-81fa-534f5f0b739e-serving-cert\") pod \"console-operator-58897d9998-zllnr\" (UID: \"c7dfebc2-dcf5-480c-81fa-534f5f0b739e\") " pod="openshift-console-operator/console-operator-58897d9998-zllnr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.129555 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33b8eb32-3572-48d9-a322-5ff99b870d99-config\") pod \"openshift-apiserver-operator-796bbdcf4f-j4nw2\" (UID: \"33b8eb32-3572-48d9-a322-5ff99b870d99\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j4nw2" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.129628 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-audit-dir\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.129662 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-trusted-ca-bundle\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.129723 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bb60680f-a87a-4086-b701-91f89a1d123f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8rphr\" (UID: \"bb60680f-a87a-4086-b701-91f89a1d123f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8rphr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.129993 
4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.130045 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.130176 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/79026dfe-f0d3-4167-b85e-b1dc662b9548-auth-proxy-config\") pod \"machine-approver-56656f9798-bwdtf\" (UID: \"79026dfe-f0d3-4167-b85e-b1dc662b9548\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bwdtf" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.130573 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f778c361-3570-4a96-b4d1-1ba163ce04b9-client-ca\") pod \"route-controller-manager-6576b87f9c-htdcl\" (UID: \"f778c361-3570-4a96-b4d1-1ba163ce04b9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.130979 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f778c361-3570-4a96-b4d1-1ba163ce04b9-config\") pod \"route-controller-manager-6576b87f9c-htdcl\" (UID: \"f778c361-3570-4a96-b4d1-1ba163ce04b9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.131037 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-node-pullsecrets\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.131608 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-audit\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.131819 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7dfebc2-dcf5-480c-81fa-534f5f0b739e-trusted-ca\") pod \"console-operator-58897d9998-zllnr\" (UID: \"c7dfebc2-dcf5-480c-81fa-534f5f0b739e\") " pod="openshift-console-operator/console-operator-58897d9998-zllnr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.132162 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb60680f-a87a-4086-b701-91f89a1d123f-config\") pod \"controller-manager-879f6c89f-8rphr\" (UID: \"bb60680f-a87a-4086-b701-91f89a1d123f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8rphr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.132659 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6f680eac-8309-428b-9b5e-f5324aaf426a-images\") pod \"machine-api-operator-5694c8668f-jtkrh\" (UID: \"6f680eac-8309-428b-9b5e-f5324aaf426a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jtkrh" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.132994 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.134426 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.134587 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.135056 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d8f1b03-2eb3-4a9a-9d89-860b3efce88b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8t9kz\" (UID: \"2d8f1b03-2eb3-4a9a-9d89-860b3efce88b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8t9kz" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.135147 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0ad42fe5-9e02-4d4e-849e-a03f83b4346d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-p6mf4\" (UID: \"0ad42fe5-9e02-4d4e-849e-a03f83b4346d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6mf4" Dec 01 
14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.135406 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb60680f-a87a-4086-b701-91f89a1d123f-serving-cert\") pod \"controller-manager-879f6c89f-8rphr\" (UID: \"bb60680f-a87a-4086-b701-91f89a1d123f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8rphr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.135436 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-encryption-config\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.135534 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-etcd-client\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.136107 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.136408 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.137257 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.137407 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f778c361-3570-4a96-b4d1-1ba163ce04b9-serving-cert\") pod \"route-controller-manager-6576b87f9c-htdcl\" (UID: \"f778c361-3570-4a96-b4d1-1ba163ce04b9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.138592 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.138663 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/79026dfe-f0d3-4167-b85e-b1dc662b9548-machine-approver-tls\") pod \"machine-approver-56656f9798-bwdtf\" (UID: \"79026dfe-f0d3-4167-b85e-b1dc662b9548\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bwdtf" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.138815 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.139754 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f680eac-8309-428b-9b5e-f5324aaf426a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jtkrh\" (UID: \"6f680eac-8309-428b-9b5e-f5324aaf426a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jtkrh" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.139855 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33b8eb32-3572-48d9-a322-5ff99b870d99-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-j4nw2\" (UID: \"33b8eb32-3572-48d9-a322-5ff99b870d99\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j4nw2" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.141027 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.145200 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.165297 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.185679 4637 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.205788 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.224941 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0b788a05-de9f-4ef6-985d-d9569f4a9860-available-featuregates\") pod \"openshift-config-operator-7777fb866f-z4qtk\" (UID: \"0b788a05-de9f-4ef6-985d-d9569f4a9860\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4qtk"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.224982 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e7bc889e-bf57-415e-829b-9f1b91253db0-etcd-ca\") pod \"etcd-operator-b45778765-tcvq5\" (UID: \"e7bc889e-bf57-415e-829b-9f1b91253db0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcvq5"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.225015 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e7bc889e-bf57-415e-829b-9f1b91253db0-etcd-client\") pod \"etcd-operator-b45778765-tcvq5\" (UID: \"e7bc889e-bf57-415e-829b-9f1b91253db0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcvq5"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.225065 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff0ed963-2cbb-46cd-9f0d-33a19a29c99f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-btm95\" (UID: \"ff0ed963-2cbb-46cd-9f0d-33a19a29c99f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-btm95"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.225081 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7bc889e-bf57-415e-829b-9f1b91253db0-config\") pod \"etcd-operator-b45778765-tcvq5\" (UID: \"e7bc889e-bf57-415e-829b-9f1b91253db0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcvq5"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.225096 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7bc889e-bf57-415e-829b-9f1b91253db0-serving-cert\") pod \"etcd-operator-b45778765-tcvq5\" (UID: \"e7bc889e-bf57-415e-829b-9f1b91253db0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcvq5"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.225109 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmwm5\" (UniqueName: \"kubernetes.io/projected/0b788a05-de9f-4ef6-985d-d9569f4a9860-kube-api-access-dmwm5\") pod \"openshift-config-operator-7777fb866f-z4qtk\" (UID: \"0b788a05-de9f-4ef6-985d-d9569f4a9860\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4qtk"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.225135 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff0ed963-2cbb-46cd-9f0d-33a19a29c99f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-btm95\" (UID: \"ff0ed963-2cbb-46cd-9f0d-33a19a29c99f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-btm95"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.225148 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b788a05-de9f-4ef6-985d-d9569f4a9860-serving-cert\") pod \"openshift-config-operator-7777fb866f-z4qtk\" (UID: \"0b788a05-de9f-4ef6-985d-d9569f4a9860\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4qtk"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.225184 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e7bc889e-bf57-415e-829b-9f1b91253db0-etcd-service-ca\") pod \"etcd-operator-b45778765-tcvq5\" (UID: \"e7bc889e-bf57-415e-829b-9f1b91253db0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcvq5"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.225206 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1216bb3b-7fde-42f8-b2f8-7b070dd63690-metrics-tls\") pod \"dns-operator-744455d44c-flvbk\" (UID: \"1216bb3b-7fde-42f8-b2f8-7b070dd63690\") " pod="openshift-dns-operator/dns-operator-744455d44c-flvbk"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.225222 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2g98\" (UniqueName: \"kubernetes.io/projected/1216bb3b-7fde-42f8-b2f8-7b070dd63690-kube-api-access-n2g98\") pod \"dns-operator-744455d44c-flvbk\" (UID: \"1216bb3b-7fde-42f8-b2f8-7b070dd63690\") " pod="openshift-dns-operator/dns-operator-744455d44c-flvbk"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.225256 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpstt\" (UniqueName: \"kubernetes.io/projected/e7bc889e-bf57-415e-829b-9f1b91253db0-kube-api-access-xpstt\") pod \"etcd-operator-b45778765-tcvq5\" (UID: \"e7bc889e-bf57-415e-829b-9f1b91253db0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcvq5"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.225284 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff0ed963-2cbb-46cd-9f0d-33a19a29c99f-config\") pod \"kube-controller-manager-operator-78b949d7b-btm95\" (UID: \"ff0ed963-2cbb-46cd-9f0d-33a19a29c99f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-btm95"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.225602 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0b788a05-de9f-4ef6-985d-d9569f4a9860-available-featuregates\") pod \"openshift-config-operator-7777fb866f-z4qtk\" (UID: \"0b788a05-de9f-4ef6-985d-d9569f4a9860\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4qtk"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.225802 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.226750 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e7bc889e-bf57-415e-829b-9f1b91253db0-etcd-service-ca\") pod \"etcd-operator-b45778765-tcvq5\" (UID: \"e7bc889e-bf57-415e-829b-9f1b91253db0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcvq5"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.226863 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7bc889e-bf57-415e-829b-9f1b91253db0-config\") pod \"etcd-operator-b45778765-tcvq5\" (UID: \"e7bc889e-bf57-415e-829b-9f1b91253db0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcvq5"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.227299 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e7bc889e-bf57-415e-829b-9f1b91253db0-etcd-ca\") pod \"etcd-operator-b45778765-tcvq5\" (UID: \"e7bc889e-bf57-415e-829b-9f1b91253db0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcvq5"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.228359 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e7bc889e-bf57-415e-829b-9f1b91253db0-etcd-client\") pod \"etcd-operator-b45778765-tcvq5\" (UID: \"e7bc889e-bf57-415e-829b-9f1b91253db0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcvq5"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.228805 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1216bb3b-7fde-42f8-b2f8-7b070dd63690-metrics-tls\") pod \"dns-operator-744455d44c-flvbk\" (UID: \"1216bb3b-7fde-42f8-b2f8-7b070dd63690\") " pod="openshift-dns-operator/dns-operator-744455d44c-flvbk"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.229177 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7bc889e-bf57-415e-829b-9f1b91253db0-serving-cert\") pod \"etcd-operator-b45778765-tcvq5\" (UID: \"e7bc889e-bf57-415e-829b-9f1b91253db0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcvq5"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.230285 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b788a05-de9f-4ef6-985d-d9569f4a9860-serving-cert\") pod \"openshift-config-operator-7777fb866f-z4qtk\" (UID: \"0b788a05-de9f-4ef6-985d-d9569f4a9860\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4qtk"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.245131 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.265505 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.285529 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.305245 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.309840 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff0ed963-2cbb-46cd-9f0d-33a19a29c99f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-btm95\" (UID: \"ff0ed963-2cbb-46cd-9f0d-33a19a29c99f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-btm95"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.325665 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.336627 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff0ed963-2cbb-46cd-9f0d-33a19a29c99f-config\") pod \"kube-controller-manager-operator-78b949d7b-btm95\" (UID: \"ff0ed963-2cbb-46cd-9f0d-33a19a29c99f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-btm95"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.364828 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.386048 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.405245 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.425170 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.458636 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46qgh\" (UniqueName: \"kubernetes.io/projected/795c57e9-15ad-4f5d-9e6a-b4df4bf13d53-kube-api-access-46qgh\") pod \"openshift-controller-manager-operator-756b6f6bc6-nsmrs\" (UID: \"795c57e9-15ad-4f5d-9e6a-b4df4bf13d53\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsmrs"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.485381 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.506036 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.526119 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.545562 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.566012 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.585036 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.600971 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsmrs"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.605520 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.626393 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.651483 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.668822 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.721483 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrnvz\" (UniqueName: \"kubernetes.io/projected/8720c46c-d2b9-4f4e-8ca0-379ff5e30923-kube-api-access-mrnvz\") pod \"apiserver-7bbb656c7d-mdrrl\" (UID: \"8720c46c-d2b9-4f4e-8ca0-379ff5e30923\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.728645 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.729962 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv54r\" (UniqueName: \"kubernetes.io/projected/6462925c-d528-4dd6-a6e1-55563db83168-kube-api-access-sv54r\") pod \"console-f9d7485db-98z2t\" (UID: \"6462925c-d528-4dd6-a6e1-55563db83168\") " pod="openshift-console/console-f9d7485db-98z2t"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.745706 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.765897 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.770321 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.770348 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.770427 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.770552 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.787367 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.789452 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsmrs"]
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.806256 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.825908 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.845959 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.865208 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.885791 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.905998 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.925788 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.946162 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.965189 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.970623 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.983673 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-98z2t"
Dec 01 14:48:06 crc kubenswrapper[4637]: I1201 14:48:06.985189 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.005883 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.033370 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.044260 4637 request.go:700] Waited for 1.012391982s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-stats-default&limit=500&resourceVersion=0
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.046210 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.066434 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.085816 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.105483 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.126508 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.144803 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl"]
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.145689 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.165533 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.185450 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.186554 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-98z2t"]
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.205695 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.225626 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.245556 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.266649 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.286486 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.306016 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.330497 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.346000 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.365086 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.386603 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.399085 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsmrs" event={"ID":"795c57e9-15ad-4f5d-9e6a-b4df4bf13d53","Type":"ContainerStarted","Data":"0528e687b9c1694cb9fa835e0c38f8199e213180ffa16789b268b1f8bef3cd8e"}
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.400246 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" event={"ID":"8720c46c-d2b9-4f4e-8ca0-379ff5e30923","Type":"ContainerStarted","Data":"5082269959b1d97709307694365c9ce6fed5f850fc2ac0f6e71136ca5c932ce3"}
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.409230 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.426646 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.445710 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.466586 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.506774 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.525312 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.544327 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.544352 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.565702 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.585569 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.606308 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.625735 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.645603 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.666439 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.684973 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.706614 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.726535 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.746292 4637 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.766146 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.785818 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.805784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.826263 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.847230 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.865889 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.885616 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.905881 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.925850 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.965130 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj6ss\" (UniqueName: \"kubernetes.io/projected/0ad42fe5-9e02-4d4e-849e-a03f83b4346d-kube-api-access-xj6ss\") pod \"cluster-samples-operator-665b6dd947-p6mf4\" (UID: \"0ad42fe5-9e02-4d4e-849e-a03f83b4346d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6mf4"
Dec 01 14:48:07 crc kubenswrapper[4637]: I1201 14:48:07.984610 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxfht\" (UniqueName: \"kubernetes.io/projected/79026dfe-f0d3-4167-b85e-b1dc662b9548-kube-api-access-lxfht\") pod \"machine-approver-56656f9798-bwdtf\" (UID: \"79026dfe-f0d3-4167-b85e-b1dc662b9548\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bwdtf"
Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.002762 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5nwp\" (UniqueName: \"kubernetes.io/projected/33b8eb32-3572-48d9-a322-5ff99b870d99-kube-api-access-t5nwp\") pod \"openshift-apiserver-operator-796bbdcf4f-j4nw2\" (UID: \"33b8eb32-3572-48d9-a322-5ff99b870d99\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j4nw2"
Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.026105 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bwdtf"
Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.031705 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7q4t\" (UniqueName: \"kubernetes.io/projected/c7dfebc2-dcf5-480c-81fa-534f5f0b739e-kube-api-access-c7q4t\") pod \"console-operator-58897d9998-zllnr\" (UID: \"c7dfebc2-dcf5-480c-81fa-534f5f0b739e\") " pod="openshift-console-operator/console-operator-58897d9998-zllnr"
Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.042220 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqbb2\" (UniqueName: \"kubernetes.io/projected/2d8f1b03-2eb3-4a9a-9d89-860b3efce88b-kube-api-access-bqbb2\") pod \"authentication-operator-69f744f599-8t9kz\" (UID: \"2d8f1b03-2eb3-4a9a-9d89-860b3efce88b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8t9kz"
Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.044337 4637 request.go:700] Waited for 1.914548067s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/default/token
Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.059750 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-zllnr"
Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.067235 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7rsr\" (UniqueName: \"kubernetes.io/projected/cbf56a46-431a-40ef-985f-8eb89ee80d70-kube-api-access-g7rsr\") pod \"downloads-7954f5f757-z2gpq\" (UID: \"cbf56a46-431a-40ef-985f-8eb89ee80d70\") " pod="openshift-console/downloads-7954f5f757-z2gpq"
Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.083543 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4gzn\" (UniqueName: \"kubernetes.io/projected/b7950539-3e35-4e74-8a69-c2b3c3ba928f-kube-api-access-t4gzn\") pod \"oauth-openshift-558db77b4-kvhqq\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq"
Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.103027 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch2c4\" (UniqueName: \"kubernetes.io/projected/6a441cc0-e2d3-4572-a5f1-2ed8420bdced-kube-api-access-ch2c4\") pod \"apiserver-76f77b778f-lmzmr\" (UID: \"6a441cc0-e2d3-4572-a5f1-2ed8420bdced\") " pod="openshift-apiserver/apiserver-76f77b778f-lmzmr"
Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.123248 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64pj4\" (UniqueName: \"kubernetes.io/projected/bb60680f-a87a-4086-b701-91f89a1d123f-kube-api-access-64pj4\") pod \"controller-manager-879f6c89f-8rphr\" (UID: \"bb60680f-a87a-4086-b701-91f89a1d123f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8rphr"
Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.146716 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljdzs\" (UniqueName: \"kubernetes.io/projected/f778c361-3570-4a96-b4d1-1ba163ce04b9-kube-api-access-ljdzs\") pod \"route-controller-manager-6576b87f9c-htdcl\" (UID: \"f778c361-3570-4a96-b4d1-1ba163ce04b9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl"
Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.152757 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq"
Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.160913 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xswtv\" (UniqueName: \"kubernetes.io/projected/6f680eac-8309-428b-9b5e-f5324aaf426a-kube-api-access-xswtv\") pod \"machine-api-operator-5694c8668f-jtkrh\" (UID: \"6f680eac-8309-428b-9b5e-f5324aaf426a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jtkrh"
Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.164980 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-z2gpq"
Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.184785 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c77fa87b-60af-46d6-a0e1-ef83ee35ba3f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6cwvd\" (UID: \"c77fa87b-60af-46d6-a0e1-ef83ee35ba3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6cwvd"
Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.190409 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6mf4"
Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.198544 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgr4c\" (UniqueName: \"kubernetes.io/projected/c77fa87b-60af-46d6-a0e1-ef83ee35ba3f-kube-api-access-pgr4c\") pod \"cluster-image-registry-operator-dc59b4c8b-6cwvd\" (UID: \"c77fa87b-60af-46d6-a0e1-ef83ee35ba3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6cwvd"
Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.209386 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl"
Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.210355 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8rphr"
Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.225796 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff0ed963-2cbb-46cd-9f0d-33a19a29c99f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-btm95\" (UID: \"ff0ed963-2cbb-46cd-9f0d-33a19a29c99f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-btm95"
Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.240181 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jtkrh"
Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.249083 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-btm95"
Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.249628 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmwm5\" (UniqueName: \"kubernetes.io/projected/0b788a05-de9f-4ef6-985d-d9569f4a9860-kube-api-access-dmwm5\") pod \"openshift-config-operator-7777fb866f-z4qtk\" (UID: \"0b788a05-de9f-4ef6-985d-d9569f4a9860\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4qtk"
Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.262277 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-lmzmr"
Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.268391 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2g98\" (UniqueName: \"kubernetes.io/projected/1216bb3b-7fde-42f8-b2f8-7b070dd63690-kube-api-access-n2g98\") pod \"dns-operator-744455d44c-flvbk\" (UID: \"1216bb3b-7fde-42f8-b2f8-7b070dd63690\") " pod="openshift-dns-operator/dns-operator-744455d44c-flvbk"
Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.280807 4637 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j4nw2" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.283267 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpstt\" (UniqueName: \"kubernetes.io/projected/e7bc889e-bf57-415e-829b-9f1b91253db0-kube-api-access-xpstt\") pod \"etcd-operator-b45778765-tcvq5\" (UID: \"e7bc889e-bf57-415e-829b-9f1b91253db0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tcvq5" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.298944 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zllnr"] Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.325881 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.340041 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8t9kz" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.345838 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.366032 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-registry-certificates\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.366090 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-registry-tls\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.366122 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-trusted-ca\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.366154 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.366182 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.366205 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.366243 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-bound-sa-token\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.366261 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28km5\" (UniqueName: \"kubernetes.io/projected/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-kube-api-access-28km5\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.366546 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" 
Dec 01 14:48:08 crc kubenswrapper[4637]: E1201 14:48:08.367315 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:08.867297138 +0000 UTC m=+139.385005966 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.393341 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.417542 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.421442 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kvhqq"] Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.430320 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 01 14:48:08 crc kubenswrapper[4637]: W1201 14:48:08.464798 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7950539_3e35_4e74_8a69_c2b3c3ba928f.slice/crio-4096694e623c2ce7a7db3c7651342cea4f694680716d3e885d64dfa3cd20708f WatchSource:0}: Error finding container 4096694e623c2ce7a7db3c7651342cea4f694680716d3e885d64dfa3cd20708f: Status 404 returned 
error can't find the container with id 4096694e623c2ce7a7db3c7651342cea4f694680716d3e885d64dfa3cd20708f Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.466572 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-zllnr" event={"ID":"c7dfebc2-dcf5-480c-81fa-534f5f0b739e","Type":"ContainerStarted","Data":"b6a141c5a9ee08bb1d4e2cfb72cd34c45a116d40c560e29fab38c22eaf13c2e3"} Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.467117 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.467414 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b473b5ee-cb81-4f88-b995-76895c4462a8-bound-sa-token\") pod \"ingress-operator-5b745b69d9-j7ft6\" (UID: \"b473b5ee-cb81-4f88-b995-76895c4462a8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j7ft6" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.467526 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a8d90d8-8039-4557-9bc7-00e63e0cb5d3-config\") pod \"service-ca-operator-777779d784-gqjlb\" (UID: \"6a8d90d8-8039-4557-9bc7-00e63e0cb5d3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gqjlb" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.467553 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cc88f839-deb8-4d34-bfb6-c8da6a9087d6-signing-cabundle\") pod 
\"service-ca-9c57cc56f-wpnfr\" (UID: \"cc88f839-deb8-4d34-bfb6-c8da6a9087d6\") " pod="openshift-service-ca/service-ca-9c57cc56f-wpnfr" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.467581 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7n2k\" (UniqueName: \"kubernetes.io/projected/23aa9f67-7343-49fc-884c-a48fef29649b-kube-api-access-g7n2k\") pod \"olm-operator-6b444d44fb-dhnkh\" (UID: \"23aa9f67-7343-49fc-884c-a48fef29649b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhnkh" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.467608 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.467672 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6e99277b-aa2d-4f8d-a2f9-aeb954080a27-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-244ll\" (UID: \"6e99277b-aa2d-4f8d-a2f9-aeb954080a27\") " pod="openshift-marketplace/marketplace-operator-79b997595-244ll" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.467718 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8ee48f7d-1905-49a4-b201-4a20be970a40-auth-proxy-config\") pod \"machine-config-operator-74547568cd-c46sl\" (UID: \"8ee48f7d-1905-49a4-b201-4a20be970a40\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c46sl" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.467743 4637 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b33b7298-4521-49ef-a7bd-5a7c94e9aed7-socket-dir\") pod \"csi-hostpathplugin-f6mt7\" (UID: \"b33b7298-4521-49ef-a7bd-5a7c94e9aed7\") " pod="hostpath-provisioner/csi-hostpathplugin-f6mt7" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.467773 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b33b7298-4521-49ef-a7bd-5a7c94e9aed7-mountpoint-dir\") pod \"csi-hostpathplugin-f6mt7\" (UID: \"b33b7298-4521-49ef-a7bd-5a7c94e9aed7\") " pod="hostpath-provisioner/csi-hostpathplugin-f6mt7" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.467794 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-bound-sa-token\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.467844 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/63c18ab1-c963-4bed-a0b5-7d0873eebbac-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8ncnx\" (UID: \"63c18ab1-c963-4bed-a0b5-7d0873eebbac\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ncnx" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.467874 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/08cd040f-3760-4400-a689-2d3806f22d94-profile-collector-cert\") pod \"catalog-operator-68c6474976-hj7gt\" (UID: 
\"08cd040f-3760-4400-a689-2d3806f22d94\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj7gt" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.467960 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58sp9\" (UniqueName: \"kubernetes.io/projected/067f5196-983d-4c49-a194-68357dfb4963-kube-api-access-58sp9\") pod \"router-default-5444994796-nbf7h\" (UID: \"067f5196-983d-4c49-a194-68357dfb4963\") " pod="openshift-ingress/router-default-5444994796-nbf7h" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.467989 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmwvf\" (UniqueName: \"kubernetes.io/projected/b33b7298-4521-49ef-a7bd-5a7c94e9aed7-kube-api-access-nmwvf\") pod \"csi-hostpathplugin-f6mt7\" (UID: \"b33b7298-4521-49ef-a7bd-5a7c94e9aed7\") " pod="hostpath-provisioner/csi-hostpathplugin-f6mt7" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.468028 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28km5\" (UniqueName: \"kubernetes.io/projected/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-kube-api-access-28km5\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.468050 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6ccf\" (UniqueName: \"kubernetes.io/projected/08cd040f-3760-4400-a689-2d3806f22d94-kube-api-access-x6ccf\") pod \"catalog-operator-68c6474976-hj7gt\" (UID: \"08cd040f-3760-4400-a689-2d3806f22d94\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj7gt" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.468071 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7c1aabd-a992-44af-8ace-e8f61ef4c55f-metrics-tls\") pod \"dns-default-lc9z9\" (UID: \"f7c1aabd-a992-44af-8ace-e8f61ef4c55f\") " pod="openshift-dns/dns-default-lc9z9" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.468131 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b33b7298-4521-49ef-a7bd-5a7c94e9aed7-csi-data-dir\") pod \"csi-hostpathplugin-f6mt7\" (UID: \"b33b7298-4521-49ef-a7bd-5a7c94e9aed7\") " pod="hostpath-provisioner/csi-hostpathplugin-f6mt7" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.468202 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/23aa9f67-7343-49fc-884c-a48fef29649b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-dhnkh\" (UID: \"23aa9f67-7343-49fc-884c-a48fef29649b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhnkh" Dec 01 14:48:08 crc kubenswrapper[4637]: E1201 14:48:08.468342 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:08.968322853 +0000 UTC m=+139.486031681 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.468647 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-registry-certificates\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.468673 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnw66\" (UniqueName: \"kubernetes.io/projected/8ee48f7d-1905-49a4-b201-4a20be970a40-kube-api-access-gnw66\") pod \"machine-config-operator-74547568cd-c46sl\" (UID: \"8ee48f7d-1905-49a4-b201-4a20be970a40\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c46sl" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.468711 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdhns\" (UniqueName: \"kubernetes.io/projected/f705cd15-60f5-48ff-abc9-4a468bb32285-kube-api-access-gdhns\") pod \"machine-config-server-9rwl6\" (UID: \"f705cd15-60f5-48ff-abc9-4a468bb32285\") " pod="openshift-machine-config-operator/machine-config-server-9rwl6" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.468734 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/f7c1aabd-a992-44af-8ace-e8f61ef4c55f-config-volume\") pod \"dns-default-lc9z9\" (UID: \"f7c1aabd-a992-44af-8ace-e8f61ef4c55f\") " pod="openshift-dns/dns-default-lc9z9" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.468772 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c18ab1-c963-4bed-a0b5-7d0873eebbac-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8ncnx\" (UID: \"63c18ab1-c963-4bed-a0b5-7d0873eebbac\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ncnx" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.468810 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02ecc491-496e-4bcf-bba9-ff48b415c10a-config\") pod \"kube-apiserver-operator-766d6c64bb-cnchp\" (UID: \"02ecc491-496e-4bcf-bba9-ff48b415c10a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cnchp" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.468833 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4gjz\" (UniqueName: \"kubernetes.io/projected/224a5686-f4fa-4f6b-b397-9a39415f6cf0-kube-api-access-v4gjz\") pod \"ingress-canary-pxf52\" (UID: \"224a5686-f4fa-4f6b-b397-9a39415f6cf0\") " pod="openshift-ingress-canary/ingress-canary-pxf52" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.468869 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh5mq\" (UniqueName: \"kubernetes.io/projected/03fa0315-7dd7-466c-a22b-3597d724a281-kube-api-access-bh5mq\") pod \"machine-config-controller-84d6567774-57282\" (UID: \"03fa0315-7dd7-466c-a22b-3597d724a281\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-57282" Dec 01 
14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.468887 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a8d90d8-8039-4557-9bc7-00e63e0cb5d3-serving-cert\") pod \"service-ca-operator-777779d784-gqjlb\" (UID: \"6a8d90d8-8039-4557-9bc7-00e63e0cb5d3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gqjlb" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.468952 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/067f5196-983d-4c49-a194-68357dfb4963-metrics-certs\") pod \"router-default-5444994796-nbf7h\" (UID: \"067f5196-983d-4c49-a194-68357dfb4963\") " pod="openshift-ingress/router-default-5444994796-nbf7h" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.468979 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-registry-tls\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.469023 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02ecc491-496e-4bcf-bba9-ff48b415c10a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-cnchp\" (UID: \"02ecc491-496e-4bcf-bba9-ff48b415c10a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cnchp" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.469042 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phkbv\" (UniqueName: 
\"kubernetes.io/projected/369bf28c-11f9-494a-8a91-a11e861d84e0-kube-api-access-phkbv\") pod \"control-plane-machine-set-operator-78cbb6b69f-w6sz5\" (UID: \"369bf28c-11f9-494a-8a91-a11e861d84e0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6sz5" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.469058 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/242bd093-da03-4995-9b63-e5cbc6c40650-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-5js8r\" (UID: \"242bd093-da03-4995-9b63-e5cbc6c40650\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5js8r" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.469073 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/03fa0315-7dd7-466c-a22b-3597d724a281-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-57282\" (UID: \"03fa0315-7dd7-466c-a22b-3597d724a281\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-57282" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.469091 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b33b7298-4521-49ef-a7bd-5a7c94e9aed7-plugins-dir\") pod \"csi-hostpathplugin-f6mt7\" (UID: \"b33b7298-4521-49ef-a7bd-5a7c94e9aed7\") " pod="hostpath-provisioner/csi-hostpathplugin-f6mt7" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.469136 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/03fa0315-7dd7-466c-a22b-3597d724a281-proxy-tls\") pod \"machine-config-controller-84d6567774-57282\" (UID: \"03fa0315-7dd7-466c-a22b-3597d724a281\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-57282" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.469150 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tztj\" (UniqueName: \"kubernetes.io/projected/f7c1aabd-a992-44af-8ace-e8f61ef4c55f-kube-api-access-8tztj\") pod \"dns-default-lc9z9\" (UID: \"f7c1aabd-a992-44af-8ace-e8f61ef4c55f\") " pod="openshift-dns/dns-default-lc9z9" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.469177 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.469204 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60417f89-e1fc-48aa-ae2c-1a28adda9b65-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-f4jrv\" (UID: \"60417f89-e1fc-48aa-ae2c-1a28adda9b65\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f4jrv" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.469220 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/251bd5c6-88f1-46eb-8a76-434c8e7a1e70-secret-volume\") pod \"collect-profiles-29410005-5pnzr\" (UID: \"251bd5c6-88f1-46eb-8a76-434c8e7a1e70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410005-5pnzr" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.469247 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-lwpss\" (UniqueName: \"kubernetes.io/projected/750daab9-bb17-49c4-9db0-cae26692ded5-kube-api-access-lwpss\") pod \"migrator-59844c95c7-cx6wb\" (UID: \"750daab9-bb17-49c4-9db0-cae26692ded5\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cx6wb" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.469261 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b33b7298-4521-49ef-a7bd-5a7c94e9aed7-registration-dir\") pod \"csi-hostpathplugin-f6mt7\" (UID: \"b33b7298-4521-49ef-a7bd-5a7c94e9aed7\") " pod="hostpath-provisioner/csi-hostpathplugin-f6mt7" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.469286 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4299p\" (UniqueName: \"kubernetes.io/projected/6a8d90d8-8039-4557-9bc7-00e63e0cb5d3-kube-api-access-4299p\") pod \"service-ca-operator-777779d784-gqjlb\" (UID: \"6a8d90d8-8039-4557-9bc7-00e63e0cb5d3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gqjlb" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.469303 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/78924479-be14-4e3f-88fb-0fe9f58adc11-webhook-cert\") pod \"packageserver-d55dfcdfc-wvhc8\" (UID: \"78924479-be14-4e3f-88fb-0fe9f58adc11\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wvhc8" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.469715 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k54j\" (UniqueName: \"kubernetes.io/projected/34c0705d-7431-485f-b393-d8d4ebd53098-kube-api-access-2k54j\") pod \"package-server-manager-789f6589d5-rdmp7\" (UID: \"34c0705d-7431-485f-b393-d8d4ebd53098\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rdmp7" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.469802 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2492\" (UniqueName: \"kubernetes.io/projected/251bd5c6-88f1-46eb-8a76-434c8e7a1e70-kube-api-access-n2492\") pod \"collect-profiles-29410005-5pnzr\" (UID: \"251bd5c6-88f1-46eb-8a76-434c8e7a1e70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410005-5pnzr" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.469827 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e99277b-aa2d-4f8d-a2f9-aeb954080a27-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-244ll\" (UID: \"6e99277b-aa2d-4f8d-a2f9-aeb954080a27\") " pod="openshift-marketplace/marketplace-operator-79b997595-244ll" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.469867 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5ssw\" (UniqueName: \"kubernetes.io/projected/242bd093-da03-4995-9b63-e5cbc6c40650-kube-api-access-t5ssw\") pod \"multus-admission-controller-857f4d67dd-5js8r\" (UID: \"242bd093-da03-4995-9b63-e5cbc6c40650\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5js8r" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.469898 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/369bf28c-11f9-494a-8a91-a11e861d84e0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-w6sz5\" (UID: \"369bf28c-11f9-494a-8a91-a11e861d84e0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6sz5" Dec 01 14:48:08 crc 
kubenswrapper[4637]: I1201 14:48:08.470675 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f705cd15-60f5-48ff-abc9-4a468bb32285-certs\") pod \"machine-config-server-9rwl6\" (UID: \"f705cd15-60f5-48ff-abc9-4a468bb32285\") " pod="openshift-machine-config-operator/machine-config-server-9rwl6" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.470706 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/23aa9f67-7343-49fc-884c-a48fef29649b-srv-cert\") pod \"olm-operator-6b444d44fb-dhnkh\" (UID: \"23aa9f67-7343-49fc-884c-a48fef29649b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhnkh" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.470742 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/067f5196-983d-4c49-a194-68357dfb4963-default-certificate\") pod \"router-default-5444994796-nbf7h\" (UID: \"067f5196-983d-4c49-a194-68357dfb4963\") " pod="openshift-ingress/router-default-5444994796-nbf7h" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.470776 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/34c0705d-7431-485f-b393-d8d4ebd53098-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rdmp7\" (UID: \"34c0705d-7431-485f-b393-d8d4ebd53098\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rdmp7" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.470794 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/78924479-be14-4e3f-88fb-0fe9f58adc11-tmpfs\") pod 
\"packageserver-d55dfcdfc-wvhc8\" (UID: \"78924479-be14-4e3f-88fb-0fe9f58adc11\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wvhc8" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.470839 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d56jg\" (UniqueName: \"kubernetes.io/projected/6e99277b-aa2d-4f8d-a2f9-aeb954080a27-kube-api-access-d56jg\") pod \"marketplace-operator-79b997595-244ll\" (UID: \"6e99277b-aa2d-4f8d-a2f9-aeb954080a27\") " pod="openshift-marketplace/marketplace-operator-79b997595-244ll" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.470879 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8ee48f7d-1905-49a4-b201-4a20be970a40-images\") pod \"machine-config-operator-74547568cd-c46sl\" (UID: \"8ee48f7d-1905-49a4-b201-4a20be970a40\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c46sl" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.470908 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nslsx\" (UniqueName: \"kubernetes.io/projected/b473b5ee-cb81-4f88-b995-76895c4462a8-kube-api-access-nslsx\") pod \"ingress-operator-5b745b69d9-j7ft6\" (UID: \"b473b5ee-cb81-4f88-b995-76895c4462a8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j7ft6" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.471078 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/067f5196-983d-4c49-a194-68357dfb4963-stats-auth\") pod \"router-default-5444994796-nbf7h\" (UID: \"067f5196-983d-4c49-a194-68357dfb4963\") " pod="openshift-ingress/router-default-5444994796-nbf7h" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.471099 4637 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/251bd5c6-88f1-46eb-8a76-434c8e7a1e70-config-volume\") pod \"collect-profiles-29410005-5pnzr\" (UID: \"251bd5c6-88f1-46eb-8a76-434c8e7a1e70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410005-5pnzr" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.471139 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/78924479-be14-4e3f-88fb-0fe9f58adc11-apiservice-cert\") pod \"packageserver-d55dfcdfc-wvhc8\" (UID: \"78924479-be14-4e3f-88fb-0fe9f58adc11\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wvhc8" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.471177 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08cd040f-3760-4400-a689-2d3806f22d94-srv-cert\") pod \"catalog-operator-68c6474976-hj7gt\" (UID: \"08cd040f-3760-4400-a689-2d3806f22d94\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj7gt" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.471207 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j89cm\" (UniqueName: \"kubernetes.io/projected/cc88f839-deb8-4d34-bfb6-c8da6a9087d6-kube-api-access-j89cm\") pod \"service-ca-9c57cc56f-wpnfr\" (UID: \"cc88f839-deb8-4d34-bfb6-c8da6a9087d6\") " pod="openshift-service-ca/service-ca-9c57cc56f-wpnfr" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.471222 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b473b5ee-cb81-4f88-b995-76895c4462a8-metrics-tls\") pod \"ingress-operator-5b745b69d9-j7ft6\" (UID: 
\"b473b5ee-cb81-4f88-b995-76895c4462a8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j7ft6" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.471242 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6tmn\" (UniqueName: \"kubernetes.io/projected/60417f89-e1fc-48aa-ae2c-1a28adda9b65-kube-api-access-r6tmn\") pod \"kube-storage-version-migrator-operator-b67b599dd-f4jrv\" (UID: \"60417f89-e1fc-48aa-ae2c-1a28adda9b65\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f4jrv" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.471257 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b473b5ee-cb81-4f88-b995-76895c4462a8-trusted-ca\") pod \"ingress-operator-5b745b69d9-j7ft6\" (UID: \"b473b5ee-cb81-4f88-b995-76895c4462a8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j7ft6" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.471277 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-trusted-ca\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.471314 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/067f5196-983d-4c49-a194-68357dfb4963-service-ca-bundle\") pod \"router-default-5444994796-nbf7h\" (UID: \"067f5196-983d-4c49-a194-68357dfb4963\") " pod="openshift-ingress/router-default-5444994796-nbf7h" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.471342 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/224a5686-f4fa-4f6b-b397-9a39415f6cf0-cert\") pod \"ingress-canary-pxf52\" (UID: \"224a5686-f4fa-4f6b-b397-9a39415f6cf0\") " pod="openshift-ingress-canary/ingress-canary-pxf52" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.471368 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63c18ab1-c963-4bed-a0b5-7d0873eebbac-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8ncnx\" (UID: \"63c18ab1-c963-4bed-a0b5-7d0873eebbac\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ncnx" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.471383 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f705cd15-60f5-48ff-abc9-4a468bb32285-node-bootstrap-token\") pod \"machine-config-server-9rwl6\" (UID: \"f705cd15-60f5-48ff-abc9-4a468bb32285\") " pod="openshift-machine-config-operator/machine-config-server-9rwl6" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.471430 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjml7\" (UniqueName: \"kubernetes.io/projected/78924479-be14-4e3f-88fb-0fe9f58adc11-kube-api-access-pjml7\") pod \"packageserver-d55dfcdfc-wvhc8\" (UID: \"78924479-be14-4e3f-88fb-0fe9f58adc11\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wvhc8" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.471462 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8ee48f7d-1905-49a4-b201-4a20be970a40-proxy-tls\") pod \"machine-config-operator-74547568cd-c46sl\" (UID: 
\"8ee48f7d-1905-49a4-b201-4a20be970a40\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c46sl" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.471500 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60417f89-e1fc-48aa-ae2c-1a28adda9b65-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-f4jrv\" (UID: \"60417f89-e1fc-48aa-ae2c-1a28adda9b65\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f4jrv" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.471516 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02ecc491-496e-4bcf-bba9-ff48b415c10a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-cnchp\" (UID: \"02ecc491-496e-4bcf-bba9-ff48b415c10a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cnchp" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.471533 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cc88f839-deb8-4d34-bfb6-c8da6a9087d6-signing-key\") pod \"service-ca-9c57cc56f-wpnfr\" (UID: \"cc88f839-deb8-4d34-bfb6-c8da6a9087d6\") " pod="openshift-service-ca/service-ca-9c57cc56f-wpnfr" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.473387 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-trusted-ca\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.474594 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.476194 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-98z2t" event={"ID":"6462925c-d528-4dd6-a6e1-55563db83168","Type":"ContainerStarted","Data":"b479937b0ff47a530a7370ab49264dfc8d3adcf73b708f0f19db29a52792de79"} Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.476236 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-98z2t" event={"ID":"6462925c-d528-4dd6-a6e1-55563db83168","Type":"ContainerStarted","Data":"b7a5f1dff888a91716290436831671ce203848f808d1ec04a544e469ae1923be"} Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.477181 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6cwvd" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.481309 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-registry-tls\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.485340 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.486014 4637 generic.go:334] "Generic (PLEG): container finished" podID="8720c46c-d2b9-4f4e-8ca0-379ff5e30923" containerID="d14adb80ddff2643d98a38ecdc4cb19116b059ddad669da2d7fdd63cf8d6ccd3" exitCode=0 Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.486156 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" event={"ID":"8720c46c-d2b9-4f4e-8ca0-379ff5e30923","Type":"ContainerDied","Data":"d14adb80ddff2643d98a38ecdc4cb19116b059ddad669da2d7fdd63cf8d6ccd3"} Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.503159 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsmrs" event={"ID":"795c57e9-15ad-4f5d-9e6a-b4df4bf13d53","Type":"ContainerStarted","Data":"b0cf9db81239556c82c1f334462dbbe8762a1dca74145d8eb76269365dc53729"} Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.504639 4637 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-bound-sa-token\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.505037 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bwdtf" event={"ID":"79026dfe-f0d3-4167-b85e-b1dc662b9548","Type":"ContainerStarted","Data":"ef6b081e1d65b36b66595297b28bce679ef2c3ac8ba03d22b22298bf9e16a19d"} Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.515374 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4qtk" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.522716 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-flvbk" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.525550 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28km5\" (UniqueName: \"kubernetes.io/projected/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-kube-api-access-28km5\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:08 crc kubenswrapper[4637]: W1201 14:48:08.529149 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf56a46_431a_40ef_985f_8eb89ee80d70.slice/crio-f272e9ba615de85d052fcd911a68a693f6c150b67f4ceb9e701cfdb0a86aea7d WatchSource:0}: Error finding container f272e9ba615de85d052fcd911a68a693f6c150b67f4ceb9e701cfdb0a86aea7d: Status 404 returned error can't find the container with id f272e9ba615de85d052fcd911a68a693f6c150b67f4ceb9e701cfdb0a86aea7d Dec 01 
14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.538462 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-z2gpq"] Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.542636 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-tcvq5" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.558845 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6mf4"] Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.572666 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/23aa9f67-7343-49fc-884c-a48fef29649b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-dhnkh\" (UID: \"23aa9f67-7343-49fc-884c-a48fef29649b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhnkh" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.572714 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnw66\" (UniqueName: \"kubernetes.io/projected/8ee48f7d-1905-49a4-b201-4a20be970a40-kube-api-access-gnw66\") pod \"machine-config-operator-74547568cd-c46sl\" (UID: \"8ee48f7d-1905-49a4-b201-4a20be970a40\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c46sl" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.572751 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdhns\" (UniqueName: \"kubernetes.io/projected/f705cd15-60f5-48ff-abc9-4a468bb32285-kube-api-access-gdhns\") pod \"machine-config-server-9rwl6\" (UID: \"f705cd15-60f5-48ff-abc9-4a468bb32285\") " pod="openshift-machine-config-operator/machine-config-server-9rwl6" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.572777 4637 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c18ab1-c963-4bed-a0b5-7d0873eebbac-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8ncnx\" (UID: \"63c18ab1-c963-4bed-a0b5-7d0873eebbac\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ncnx" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.572800 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02ecc491-496e-4bcf-bba9-ff48b415c10a-config\") pod \"kube-apiserver-operator-766d6c64bb-cnchp\" (UID: \"02ecc491-496e-4bcf-bba9-ff48b415c10a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cnchp" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.572820 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7c1aabd-a992-44af-8ace-e8f61ef4c55f-config-volume\") pod \"dns-default-lc9z9\" (UID: \"f7c1aabd-a992-44af-8ace-e8f61ef4c55f\") " pod="openshift-dns/dns-default-lc9z9" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.572842 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4gjz\" (UniqueName: \"kubernetes.io/projected/224a5686-f4fa-4f6b-b397-9a39415f6cf0-kube-api-access-v4gjz\") pod \"ingress-canary-pxf52\" (UID: \"224a5686-f4fa-4f6b-b397-9a39415f6cf0\") " pod="openshift-ingress-canary/ingress-canary-pxf52" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.572879 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh5mq\" (UniqueName: \"kubernetes.io/projected/03fa0315-7dd7-466c-a22b-3597d724a281-kube-api-access-bh5mq\") pod \"machine-config-controller-84d6567774-57282\" (UID: \"03fa0315-7dd7-466c-a22b-3597d724a281\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-57282" Dec 01 14:48:08 crc 
kubenswrapper[4637]: I1201 14:48:08.572902 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/067f5196-983d-4c49-a194-68357dfb4963-metrics-certs\") pod \"router-default-5444994796-nbf7h\" (UID: \"067f5196-983d-4c49-a194-68357dfb4963\") " pod="openshift-ingress/router-default-5444994796-nbf7h" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.572921 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a8d90d8-8039-4557-9bc7-00e63e0cb5d3-serving-cert\") pod \"service-ca-operator-777779d784-gqjlb\" (UID: \"6a8d90d8-8039-4557-9bc7-00e63e0cb5d3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gqjlb" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.572992 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02ecc491-496e-4bcf-bba9-ff48b415c10a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-cnchp\" (UID: \"02ecc491-496e-4bcf-bba9-ff48b415c10a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cnchp" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.573016 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phkbv\" (UniqueName: \"kubernetes.io/projected/369bf28c-11f9-494a-8a91-a11e861d84e0-kube-api-access-phkbv\") pod \"control-plane-machine-set-operator-78cbb6b69f-w6sz5\" (UID: \"369bf28c-11f9-494a-8a91-a11e861d84e0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6sz5" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.573037 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/242bd093-da03-4995-9b63-e5cbc6c40650-webhook-certs\") pod 
\"multus-admission-controller-857f4d67dd-5js8r\" (UID: \"242bd093-da03-4995-9b63-e5cbc6c40650\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5js8r" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.573059 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/03fa0315-7dd7-466c-a22b-3597d724a281-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-57282\" (UID: \"03fa0315-7dd7-466c-a22b-3597d724a281\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-57282" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.573080 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b33b7298-4521-49ef-a7bd-5a7c94e9aed7-plugins-dir\") pod \"csi-hostpathplugin-f6mt7\" (UID: \"b33b7298-4521-49ef-a7bd-5a7c94e9aed7\") " pod="hostpath-provisioner/csi-hostpathplugin-f6mt7" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.574361 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b33b7298-4521-49ef-a7bd-5a7c94e9aed7-plugins-dir\") pod \"csi-hostpathplugin-f6mt7\" (UID: \"b33b7298-4521-49ef-a7bd-5a7c94e9aed7\") " pod="hostpath-provisioner/csi-hostpathplugin-f6mt7" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.574447 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c18ab1-c963-4bed-a0b5-7d0873eebbac-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8ncnx\" (UID: \"63c18ab1-c963-4bed-a0b5-7d0873eebbac\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ncnx" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.574681 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/03fa0315-7dd7-466c-a22b-3597d724a281-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-57282\" (UID: \"03fa0315-7dd7-466c-a22b-3597d724a281\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-57282" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.574749 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/03fa0315-7dd7-466c-a22b-3597d724a281-proxy-tls\") pod \"machine-config-controller-84d6567774-57282\" (UID: \"03fa0315-7dd7-466c-a22b-3597d724a281\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-57282" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.574766 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tztj\" (UniqueName: \"kubernetes.io/projected/f7c1aabd-a992-44af-8ace-e8f61ef4c55f-kube-api-access-8tztj\") pod \"dns-default-lc9z9\" (UID: \"f7c1aabd-a992-44af-8ace-e8f61ef4c55f\") " pod="openshift-dns/dns-default-lc9z9" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.574763 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02ecc491-496e-4bcf-bba9-ff48b415c10a-config\") pod \"kube-apiserver-operator-766d6c64bb-cnchp\" (UID: \"02ecc491-496e-4bcf-bba9-ff48b415c10a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cnchp" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575091 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60417f89-e1fc-48aa-ae2c-1a28adda9b65-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-f4jrv\" (UID: \"60417f89-e1fc-48aa-ae2c-1a28adda9b65\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f4jrv" Dec 01 14:48:08 crc 
kubenswrapper[4637]: I1201 14:48:08.575111 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/251bd5c6-88f1-46eb-8a76-434c8e7a1e70-secret-volume\") pod \"collect-profiles-29410005-5pnzr\" (UID: \"251bd5c6-88f1-46eb-8a76-434c8e7a1e70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410005-5pnzr" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575126 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwpss\" (UniqueName: \"kubernetes.io/projected/750daab9-bb17-49c4-9db0-cae26692ded5-kube-api-access-lwpss\") pod \"migrator-59844c95c7-cx6wb\" (UID: \"750daab9-bb17-49c4-9db0-cae26692ded5\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cx6wb" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575142 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b33b7298-4521-49ef-a7bd-5a7c94e9aed7-registration-dir\") pod \"csi-hostpathplugin-f6mt7\" (UID: \"b33b7298-4521-49ef-a7bd-5a7c94e9aed7\") " pod="hostpath-provisioner/csi-hostpathplugin-f6mt7" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575166 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4299p\" (UniqueName: \"kubernetes.io/projected/6a8d90d8-8039-4557-9bc7-00e63e0cb5d3-kube-api-access-4299p\") pod \"service-ca-operator-777779d784-gqjlb\" (UID: \"6a8d90d8-8039-4557-9bc7-00e63e0cb5d3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gqjlb" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575184 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/78924479-be14-4e3f-88fb-0fe9f58adc11-webhook-cert\") pod \"packageserver-d55dfcdfc-wvhc8\" (UID: \"78924479-be14-4e3f-88fb-0fe9f58adc11\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wvhc8" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575240 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k54j\" (UniqueName: \"kubernetes.io/projected/34c0705d-7431-485f-b393-d8d4ebd53098-kube-api-access-2k54j\") pod \"package-server-manager-789f6589d5-rdmp7\" (UID: \"34c0705d-7431-485f-b393-d8d4ebd53098\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rdmp7" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575270 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2492\" (UniqueName: \"kubernetes.io/projected/251bd5c6-88f1-46eb-8a76-434c8e7a1e70-kube-api-access-n2492\") pod \"collect-profiles-29410005-5pnzr\" (UID: \"251bd5c6-88f1-46eb-8a76-434c8e7a1e70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410005-5pnzr" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575285 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e99277b-aa2d-4f8d-a2f9-aeb954080a27-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-244ll\" (UID: \"6e99277b-aa2d-4f8d-a2f9-aeb954080a27\") " pod="openshift-marketplace/marketplace-operator-79b997595-244ll" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575307 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5ssw\" (UniqueName: \"kubernetes.io/projected/242bd093-da03-4995-9b63-e5cbc6c40650-kube-api-access-t5ssw\") pod \"multus-admission-controller-857f4d67dd-5js8r\" (UID: \"242bd093-da03-4995-9b63-e5cbc6c40650\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5js8r" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575325 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/369bf28c-11f9-494a-8a91-a11e861d84e0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-w6sz5\" (UID: \"369bf28c-11f9-494a-8a91-a11e861d84e0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6sz5" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575344 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f705cd15-60f5-48ff-abc9-4a468bb32285-certs\") pod \"machine-config-server-9rwl6\" (UID: \"f705cd15-60f5-48ff-abc9-4a468bb32285\") " pod="openshift-machine-config-operator/machine-config-server-9rwl6" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575358 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/23aa9f67-7343-49fc-884c-a48fef29649b-srv-cert\") pod \"olm-operator-6b444d44fb-dhnkh\" (UID: \"23aa9f67-7343-49fc-884c-a48fef29649b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhnkh" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575375 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/067f5196-983d-4c49-a194-68357dfb4963-default-certificate\") pod \"router-default-5444994796-nbf7h\" (UID: \"067f5196-983d-4c49-a194-68357dfb4963\") " pod="openshift-ingress/router-default-5444994796-nbf7h" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575392 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/34c0705d-7431-485f-b393-d8d4ebd53098-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rdmp7\" (UID: \"34c0705d-7431-485f-b393-d8d4ebd53098\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rdmp7" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575425 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/78924479-be14-4e3f-88fb-0fe9f58adc11-tmpfs\") pod \"packageserver-d55dfcdfc-wvhc8\" (UID: \"78924479-be14-4e3f-88fb-0fe9f58adc11\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wvhc8" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575450 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d56jg\" (UniqueName: \"kubernetes.io/projected/6e99277b-aa2d-4f8d-a2f9-aeb954080a27-kube-api-access-d56jg\") pod \"marketplace-operator-79b997595-244ll\" (UID: \"6e99277b-aa2d-4f8d-a2f9-aeb954080a27\") " pod="openshift-marketplace/marketplace-operator-79b997595-244ll" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575467 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8ee48f7d-1905-49a4-b201-4a20be970a40-images\") pod \"machine-config-operator-74547568cd-c46sl\" (UID: \"8ee48f7d-1905-49a4-b201-4a20be970a40\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c46sl" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575483 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nslsx\" (UniqueName: \"kubernetes.io/projected/b473b5ee-cb81-4f88-b995-76895c4462a8-kube-api-access-nslsx\") pod \"ingress-operator-5b745b69d9-j7ft6\" (UID: \"b473b5ee-cb81-4f88-b995-76895c4462a8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j7ft6" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575498 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/067f5196-983d-4c49-a194-68357dfb4963-stats-auth\") pod \"router-default-5444994796-nbf7h\" (UID: \"067f5196-983d-4c49-a194-68357dfb4963\") " pod="openshift-ingress/router-default-5444994796-nbf7h" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575513 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/251bd5c6-88f1-46eb-8a76-434c8e7a1e70-config-volume\") pod \"collect-profiles-29410005-5pnzr\" (UID: \"251bd5c6-88f1-46eb-8a76-434c8e7a1e70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410005-5pnzr" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575530 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/78924479-be14-4e3f-88fb-0fe9f58adc11-apiservice-cert\") pod \"packageserver-d55dfcdfc-wvhc8\" (UID: \"78924479-be14-4e3f-88fb-0fe9f58adc11\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wvhc8" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575561 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08cd040f-3760-4400-a689-2d3806f22d94-srv-cert\") pod \"catalog-operator-68c6474976-hj7gt\" (UID: \"08cd040f-3760-4400-a689-2d3806f22d94\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj7gt" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575585 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j89cm\" (UniqueName: \"kubernetes.io/projected/cc88f839-deb8-4d34-bfb6-c8da6a9087d6-kube-api-access-j89cm\") pod \"service-ca-9c57cc56f-wpnfr\" (UID: \"cc88f839-deb8-4d34-bfb6-c8da6a9087d6\") " pod="openshift-service-ca/service-ca-9c57cc56f-wpnfr" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575601 4637 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b473b5ee-cb81-4f88-b995-76895c4462a8-metrics-tls\") pod \"ingress-operator-5b745b69d9-j7ft6\" (UID: \"b473b5ee-cb81-4f88-b995-76895c4462a8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j7ft6" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575625 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6tmn\" (UniqueName: \"kubernetes.io/projected/60417f89-e1fc-48aa-ae2c-1a28adda9b65-kube-api-access-r6tmn\") pod \"kube-storage-version-migrator-operator-b67b599dd-f4jrv\" (UID: \"60417f89-e1fc-48aa-ae2c-1a28adda9b65\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f4jrv" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575640 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b473b5ee-cb81-4f88-b995-76895c4462a8-trusted-ca\") pod \"ingress-operator-5b745b69d9-j7ft6\" (UID: \"b473b5ee-cb81-4f88-b995-76895c4462a8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j7ft6" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575670 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/067f5196-983d-4c49-a194-68357dfb4963-service-ca-bundle\") pod \"router-default-5444994796-nbf7h\" (UID: \"067f5196-983d-4c49-a194-68357dfb4963\") " pod="openshift-ingress/router-default-5444994796-nbf7h" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575702 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/224a5686-f4fa-4f6b-b397-9a39415f6cf0-cert\") pod \"ingress-canary-pxf52\" (UID: \"224a5686-f4fa-4f6b-b397-9a39415f6cf0\") " pod="openshift-ingress-canary/ingress-canary-pxf52" Dec 01 
14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575724 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63c18ab1-c963-4bed-a0b5-7d0873eebbac-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8ncnx\" (UID: \"63c18ab1-c963-4bed-a0b5-7d0873eebbac\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ncnx" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575742 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f705cd15-60f5-48ff-abc9-4a468bb32285-node-bootstrap-token\") pod \"machine-config-server-9rwl6\" (UID: \"f705cd15-60f5-48ff-abc9-4a468bb32285\") " pod="openshift-machine-config-operator/machine-config-server-9rwl6" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575759 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjml7\" (UniqueName: \"kubernetes.io/projected/78924479-be14-4e3f-88fb-0fe9f58adc11-kube-api-access-pjml7\") pod \"packageserver-d55dfcdfc-wvhc8\" (UID: \"78924479-be14-4e3f-88fb-0fe9f58adc11\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wvhc8" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575775 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8ee48f7d-1905-49a4-b201-4a20be970a40-proxy-tls\") pod \"machine-config-operator-74547568cd-c46sl\" (UID: \"8ee48f7d-1905-49a4-b201-4a20be970a40\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c46sl" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.575976 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60417f89-e1fc-48aa-ae2c-1a28adda9b65-serving-cert\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-f4jrv\" (UID: \"60417f89-e1fc-48aa-ae2c-1a28adda9b65\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f4jrv" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.576000 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02ecc491-496e-4bcf-bba9-ff48b415c10a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-cnchp\" (UID: \"02ecc491-496e-4bcf-bba9-ff48b415c10a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cnchp" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.576022 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cc88f839-deb8-4d34-bfb6-c8da6a9087d6-signing-key\") pod \"service-ca-9c57cc56f-wpnfr\" (UID: \"cc88f839-deb8-4d34-bfb6-c8da6a9087d6\") " pod="openshift-service-ca/service-ca-9c57cc56f-wpnfr" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.576043 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b473b5ee-cb81-4f88-b995-76895c4462a8-bound-sa-token\") pod \"ingress-operator-5b745b69d9-j7ft6\" (UID: \"b473b5ee-cb81-4f88-b995-76895c4462a8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j7ft6" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.576069 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.576100 4637 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a8d90d8-8039-4557-9bc7-00e63e0cb5d3-config\") pod \"service-ca-operator-777779d784-gqjlb\" (UID: \"6a8d90d8-8039-4557-9bc7-00e63e0cb5d3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gqjlb" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.576120 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cc88f839-deb8-4d34-bfb6-c8da6a9087d6-signing-cabundle\") pod \"service-ca-9c57cc56f-wpnfr\" (UID: \"cc88f839-deb8-4d34-bfb6-c8da6a9087d6\") " pod="openshift-service-ca/service-ca-9c57cc56f-wpnfr" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.576141 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7n2k\" (UniqueName: \"kubernetes.io/projected/23aa9f67-7343-49fc-884c-a48fef29649b-kube-api-access-g7n2k\") pod \"olm-operator-6b444d44fb-dhnkh\" (UID: \"23aa9f67-7343-49fc-884c-a48fef29649b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhnkh" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.576972 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6e99277b-aa2d-4f8d-a2f9-aeb954080a27-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-244ll\" (UID: \"6e99277b-aa2d-4f8d-a2f9-aeb954080a27\") " pod="openshift-marketplace/marketplace-operator-79b997595-244ll" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.576999 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7c1aabd-a992-44af-8ace-e8f61ef4c55f-config-volume\") pod \"dns-default-lc9z9\" (UID: \"f7c1aabd-a992-44af-8ace-e8f61ef4c55f\") " pod="openshift-dns/dns-default-lc9z9" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 
14:48:08.576999 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8ee48f7d-1905-49a4-b201-4a20be970a40-auth-proxy-config\") pod \"machine-config-operator-74547568cd-c46sl\" (UID: \"8ee48f7d-1905-49a4-b201-4a20be970a40\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c46sl" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.577052 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b33b7298-4521-49ef-a7bd-5a7c94e9aed7-socket-dir\") pod \"csi-hostpathplugin-f6mt7\" (UID: \"b33b7298-4521-49ef-a7bd-5a7c94e9aed7\") " pod="hostpath-provisioner/csi-hostpathplugin-f6mt7" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.577083 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b33b7298-4521-49ef-a7bd-5a7c94e9aed7-mountpoint-dir\") pod \"csi-hostpathplugin-f6mt7\" (UID: \"b33b7298-4521-49ef-a7bd-5a7c94e9aed7\") " pod="hostpath-provisioner/csi-hostpathplugin-f6mt7" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.577112 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/63c18ab1-c963-4bed-a0b5-7d0873eebbac-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8ncnx\" (UID: \"63c18ab1-c963-4bed-a0b5-7d0873eebbac\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ncnx" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.577135 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/08cd040f-3760-4400-a689-2d3806f22d94-profile-collector-cert\") pod \"catalog-operator-68c6474976-hj7gt\" (UID: \"08cd040f-3760-4400-a689-2d3806f22d94\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj7gt" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.577156 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58sp9\" (UniqueName: \"kubernetes.io/projected/067f5196-983d-4c49-a194-68357dfb4963-kube-api-access-58sp9\") pod \"router-default-5444994796-nbf7h\" (UID: \"067f5196-983d-4c49-a194-68357dfb4963\") " pod="openshift-ingress/router-default-5444994796-nbf7h" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.577173 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmwvf\" (UniqueName: \"kubernetes.io/projected/b33b7298-4521-49ef-a7bd-5a7c94e9aed7-kube-api-access-nmwvf\") pod \"csi-hostpathplugin-f6mt7\" (UID: \"b33b7298-4521-49ef-a7bd-5a7c94e9aed7\") " pod="hostpath-provisioner/csi-hostpathplugin-f6mt7" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.577192 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6ccf\" (UniqueName: \"kubernetes.io/projected/08cd040f-3760-4400-a689-2d3806f22d94-kube-api-access-x6ccf\") pod \"catalog-operator-68c6474976-hj7gt\" (UID: \"08cd040f-3760-4400-a689-2d3806f22d94\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj7gt" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.577209 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7c1aabd-a992-44af-8ace-e8f61ef4c55f-metrics-tls\") pod \"dns-default-lc9z9\" (UID: \"f7c1aabd-a992-44af-8ace-e8f61ef4c55f\") " pod="openshift-dns/dns-default-lc9z9" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.577224 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b33b7298-4521-49ef-a7bd-5a7c94e9aed7-csi-data-dir\") pod \"csi-hostpathplugin-f6mt7\" (UID: 
\"b33b7298-4521-49ef-a7bd-5a7c94e9aed7\") " pod="hostpath-provisioner/csi-hostpathplugin-f6mt7" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.577349 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b33b7298-4521-49ef-a7bd-5a7c94e9aed7-csi-data-dir\") pod \"csi-hostpathplugin-f6mt7\" (UID: \"b33b7298-4521-49ef-a7bd-5a7c94e9aed7\") " pod="hostpath-provisioner/csi-hostpathplugin-f6mt7" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.576464 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b33b7298-4521-49ef-a7bd-5a7c94e9aed7-registration-dir\") pod \"csi-hostpathplugin-f6mt7\" (UID: \"b33b7298-4521-49ef-a7bd-5a7c94e9aed7\") " pod="hostpath-provisioner/csi-hostpathplugin-f6mt7" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.577392 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b33b7298-4521-49ef-a7bd-5a7c94e9aed7-socket-dir\") pod \"csi-hostpathplugin-f6mt7\" (UID: \"b33b7298-4521-49ef-a7bd-5a7c94e9aed7\") " pod="hostpath-provisioner/csi-hostpathplugin-f6mt7" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.577425 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b33b7298-4521-49ef-a7bd-5a7c94e9aed7-mountpoint-dir\") pod \"csi-hostpathplugin-f6mt7\" (UID: \"b33b7298-4521-49ef-a7bd-5a7c94e9aed7\") " pod="hostpath-provisioner/csi-hostpathplugin-f6mt7" Dec 01 14:48:08 crc kubenswrapper[4637]: E1201 14:48:08.578434 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:09.078421616 +0000 UTC m=+139.596130444 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.580152 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60417f89-e1fc-48aa-ae2c-1a28adda9b65-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-f4jrv\" (UID: \"60417f89-e1fc-48aa-ae2c-1a28adda9b65\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f4jrv" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.580447 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/251bd5c6-88f1-46eb-8a76-434c8e7a1e70-config-volume\") pod \"collect-profiles-29410005-5pnzr\" (UID: \"251bd5c6-88f1-46eb-8a76-434c8e7a1e70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410005-5pnzr" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.580722 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a8d90d8-8039-4557-9bc7-00e63e0cb5d3-serving-cert\") pod \"service-ca-operator-777779d784-gqjlb\" (UID: \"6a8d90d8-8039-4557-9bc7-00e63e0cb5d3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gqjlb" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.583462 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/23aa9f67-7343-49fc-884c-a48fef29649b-profile-collector-cert\") pod 
\"olm-operator-6b444d44fb-dhnkh\" (UID: \"23aa9f67-7343-49fc-884c-a48fef29649b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhnkh" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.587026 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a8d90d8-8039-4557-9bc7-00e63e0cb5d3-config\") pod \"service-ca-operator-777779d784-gqjlb\" (UID: \"6a8d90d8-8039-4557-9bc7-00e63e0cb5d3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gqjlb" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.588644 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02ecc491-496e-4bcf-bba9-ff48b415c10a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-cnchp\" (UID: \"02ecc491-496e-4bcf-bba9-ff48b415c10a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cnchp" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.589619 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08cd040f-3760-4400-a689-2d3806f22d94-srv-cert\") pod \"catalog-operator-68c6474976-hj7gt\" (UID: \"08cd040f-3760-4400-a689-2d3806f22d94\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj7gt" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.590067 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/78924479-be14-4e3f-88fb-0fe9f58adc11-tmpfs\") pod \"packageserver-d55dfcdfc-wvhc8\" (UID: \"78924479-be14-4e3f-88fb-0fe9f58adc11\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wvhc8" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.590303 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/f705cd15-60f5-48ff-abc9-4a468bb32285-certs\") pod \"machine-config-server-9rwl6\" (UID: \"f705cd15-60f5-48ff-abc9-4a468bb32285\") " pod="openshift-machine-config-operator/machine-config-server-9rwl6" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.594174 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b473b5ee-cb81-4f88-b995-76895c4462a8-trusted-ca\") pod \"ingress-operator-5b745b69d9-j7ft6\" (UID: \"b473b5ee-cb81-4f88-b995-76895c4462a8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j7ft6" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.594912 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8ee48f7d-1905-49a4-b201-4a20be970a40-auth-proxy-config\") pod \"machine-config-operator-74547568cd-c46sl\" (UID: \"8ee48f7d-1905-49a4-b201-4a20be970a40\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c46sl" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.595784 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cc88f839-deb8-4d34-bfb6-c8da6a9087d6-signing-cabundle\") pod \"service-ca-9c57cc56f-wpnfr\" (UID: \"cc88f839-deb8-4d34-bfb6-c8da6a9087d6\") " pod="openshift-service-ca/service-ca-9c57cc56f-wpnfr" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.596847 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8ee48f7d-1905-49a4-b201-4a20be970a40-images\") pod \"machine-config-operator-74547568cd-c46sl\" (UID: \"8ee48f7d-1905-49a4-b201-4a20be970a40\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c46sl" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.597010 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/251bd5c6-88f1-46eb-8a76-434c8e7a1e70-secret-volume\") pod \"collect-profiles-29410005-5pnzr\" (UID: \"251bd5c6-88f1-46eb-8a76-434c8e7a1e70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410005-5pnzr" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.597403 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/242bd093-da03-4995-9b63-e5cbc6c40650-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-5js8r\" (UID: \"242bd093-da03-4995-9b63-e5cbc6c40650\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5js8r" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.603292 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/03fa0315-7dd7-466c-a22b-3597d724a281-proxy-tls\") pod \"machine-config-controller-84d6567774-57282\" (UID: \"03fa0315-7dd7-466c-a22b-3597d724a281\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-57282" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.614679 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/067f5196-983d-4c49-a194-68357dfb4963-service-ca-bundle\") pod \"router-default-5444994796-nbf7h\" (UID: \"067f5196-983d-4c49-a194-68357dfb4963\") " pod="openshift-ingress/router-default-5444994796-nbf7h" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.615063 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/067f5196-983d-4c49-a194-68357dfb4963-metrics-certs\") pod \"router-default-5444994796-nbf7h\" (UID: \"067f5196-983d-4c49-a194-68357dfb4963\") " pod="openshift-ingress/router-default-5444994796-nbf7h" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.615135 4637 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f705cd15-60f5-48ff-abc9-4a468bb32285-node-bootstrap-token\") pod \"machine-config-server-9rwl6\" (UID: \"f705cd15-60f5-48ff-abc9-4a468bb32285\") " pod="openshift-machine-config-operator/machine-config-server-9rwl6" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.615256 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e99277b-aa2d-4f8d-a2f9-aeb954080a27-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-244ll\" (UID: \"6e99277b-aa2d-4f8d-a2f9-aeb954080a27\") " pod="openshift-marketplace/marketplace-operator-79b997595-244ll" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.615381 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63c18ab1-c963-4bed-a0b5-7d0873eebbac-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8ncnx\" (UID: \"63c18ab1-c963-4bed-a0b5-7d0873eebbac\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ncnx" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.615596 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/78924479-be14-4e3f-88fb-0fe9f58adc11-webhook-cert\") pod \"packageserver-d55dfcdfc-wvhc8\" (UID: \"78924479-be14-4e3f-88fb-0fe9f58adc11\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wvhc8" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.615662 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-registry-certificates\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.616059 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/067f5196-983d-4c49-a194-68357dfb4963-default-certificate\") pod \"router-default-5444994796-nbf7h\" (UID: \"067f5196-983d-4c49-a194-68357dfb4963\") " pod="openshift-ingress/router-default-5444994796-nbf7h" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.616816 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/78924479-be14-4e3f-88fb-0fe9f58adc11-apiservice-cert\") pod \"packageserver-d55dfcdfc-wvhc8\" (UID: \"78924479-be14-4e3f-88fb-0fe9f58adc11\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wvhc8" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.622286 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/224a5686-f4fa-4f6b-b397-9a39415f6cf0-cert\") pod \"ingress-canary-pxf52\" (UID: \"224a5686-f4fa-4f6b-b397-9a39415f6cf0\") " pod="openshift-ingress-canary/ingress-canary-pxf52" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.622812 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cc88f839-deb8-4d34-bfb6-c8da6a9087d6-signing-key\") pod \"service-ca-9c57cc56f-wpnfr\" (UID: \"cc88f839-deb8-4d34-bfb6-c8da6a9087d6\") " pod="openshift-service-ca/service-ca-9c57cc56f-wpnfr" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.622983 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/08cd040f-3760-4400-a689-2d3806f22d94-profile-collector-cert\") pod \"catalog-operator-68c6474976-hj7gt\" (UID: \"08cd040f-3760-4400-a689-2d3806f22d94\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj7gt" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.623276 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/23aa9f67-7343-49fc-884c-a48fef29649b-srv-cert\") pod \"olm-operator-6b444d44fb-dhnkh\" (UID: \"23aa9f67-7343-49fc-884c-a48fef29649b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhnkh" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.623400 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60417f89-e1fc-48aa-ae2c-1a28adda9b65-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-f4jrv\" (UID: \"60417f89-e1fc-48aa-ae2c-1a28adda9b65\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f4jrv" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.624148 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8ee48f7d-1905-49a4-b201-4a20be970a40-proxy-tls\") pod \"machine-config-operator-74547568cd-c46sl\" (UID: \"8ee48f7d-1905-49a4-b201-4a20be970a40\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c46sl" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.625585 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/369bf28c-11f9-494a-8a91-a11e861d84e0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-w6sz5\" (UID: \"369bf28c-11f9-494a-8a91-a11e861d84e0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6sz5" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.626210 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/b473b5ee-cb81-4f88-b995-76895c4462a8-metrics-tls\") pod \"ingress-operator-5b745b69d9-j7ft6\" (UID: \"b473b5ee-cb81-4f88-b995-76895c4462a8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j7ft6" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.632856 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/067f5196-983d-4c49-a194-68357dfb4963-stats-auth\") pod \"router-default-5444994796-nbf7h\" (UID: \"067f5196-983d-4c49-a194-68357dfb4963\") " pod="openshift-ingress/router-default-5444994796-nbf7h" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.644162 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7c1aabd-a992-44af-8ace-e8f61ef4c55f-metrics-tls\") pod \"dns-default-lc9z9\" (UID: \"f7c1aabd-a992-44af-8ace-e8f61ef4c55f\") " pod="openshift-dns/dns-default-lc9z9" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.644303 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/34c0705d-7431-485f-b393-d8d4ebd53098-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rdmp7\" (UID: \"34c0705d-7431-485f-b393-d8d4ebd53098\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rdmp7" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.644633 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6e99277b-aa2d-4f8d-a2f9-aeb954080a27-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-244ll\" (UID: \"6e99277b-aa2d-4f8d-a2f9-aeb954080a27\") " pod="openshift-marketplace/marketplace-operator-79b997595-244ll" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.691102 4637 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:08 crc kubenswrapper[4637]: E1201 14:48:08.691662 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:09.191641045 +0000 UTC m=+139.709349873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.691870 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:08 crc kubenswrapper[4637]: E1201 14:48:08.692189 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:09.192179441 +0000 UTC m=+139.709888269 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.701665 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4gjz\" (UniqueName: \"kubernetes.io/projected/224a5686-f4fa-4f6b-b397-9a39415f6cf0-kube-api-access-v4gjz\") pod \"ingress-canary-pxf52\" (UID: \"224a5686-f4fa-4f6b-b397-9a39415f6cf0\") " pod="openshift-ingress-canary/ingress-canary-pxf52" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.706095 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phkbv\" (UniqueName: \"kubernetes.io/projected/369bf28c-11f9-494a-8a91-a11e861d84e0-kube-api-access-phkbv\") pod \"control-plane-machine-set-operator-78cbb6b69f-w6sz5\" (UID: \"369bf28c-11f9-494a-8a91-a11e861d84e0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6sz5" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.711804 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh5mq\" (UniqueName: \"kubernetes.io/projected/03fa0315-7dd7-466c-a22b-3597d724a281-kube-api-access-bh5mq\") pod \"machine-config-controller-84d6567774-57282\" (UID: \"03fa0315-7dd7-466c-a22b-3597d724a281\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-57282" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.719785 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnw66\" (UniqueName: 
\"kubernetes.io/projected/8ee48f7d-1905-49a4-b201-4a20be970a40-kube-api-access-gnw66\") pod \"machine-config-operator-74547568cd-c46sl\" (UID: \"8ee48f7d-1905-49a4-b201-4a20be970a40\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c46sl" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.720122 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02ecc491-496e-4bcf-bba9-ff48b415c10a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-cnchp\" (UID: \"02ecc491-496e-4bcf-bba9-ff48b415c10a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cnchp" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.742709 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdhns\" (UniqueName: \"kubernetes.io/projected/f705cd15-60f5-48ff-abc9-4a468bb32285-kube-api-access-gdhns\") pod \"machine-config-server-9rwl6\" (UID: \"f705cd15-60f5-48ff-abc9-4a468bb32285\") " pod="openshift-machine-config-operator/machine-config-server-9rwl6" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.743072 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pxf52" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.752250 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tztj\" (UniqueName: \"kubernetes.io/projected/f7c1aabd-a992-44af-8ace-e8f61ef4c55f-kube-api-access-8tztj\") pod \"dns-default-lc9z9\" (UID: \"f7c1aabd-a992-44af-8ace-e8f61ef4c55f\") " pod="openshift-dns/dns-default-lc9z9" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.782487 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwpss\" (UniqueName: \"kubernetes.io/projected/750daab9-bb17-49c4-9db0-cae26692ded5-kube-api-access-lwpss\") pod \"migrator-59844c95c7-cx6wb\" (UID: \"750daab9-bb17-49c4-9db0-cae26692ded5\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cx6wb" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.792249 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:08 crc kubenswrapper[4637]: E1201 14:48:08.792648 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:09.292633359 +0000 UTC m=+139.810342187 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.794150 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4299p\" (UniqueName: \"kubernetes.io/projected/6a8d90d8-8039-4557-9bc7-00e63e0cb5d3-kube-api-access-4299p\") pod \"service-ca-operator-777779d784-gqjlb\" (UID: \"6a8d90d8-8039-4557-9bc7-00e63e0cb5d3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gqjlb" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.814488 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7n2k\" (UniqueName: \"kubernetes.io/projected/23aa9f67-7343-49fc-884c-a48fef29649b-kube-api-access-g7n2k\") pod \"olm-operator-6b444d44fb-dhnkh\" (UID: \"23aa9f67-7343-49fc-884c-a48fef29649b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhnkh" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.850965 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/63c18ab1-c963-4bed-a0b5-7d0873eebbac-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8ncnx\" (UID: \"63c18ab1-c963-4bed-a0b5-7d0873eebbac\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ncnx" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.866116 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl"] Dec 01 14:48:08 crc kubenswrapper[4637]: 
I1201 14:48:08.879347 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ncnx" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.882327 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2492\" (UniqueName: \"kubernetes.io/projected/251bd5c6-88f1-46eb-8a76-434c8e7a1e70-kube-api-access-n2492\") pod \"collect-profiles-29410005-5pnzr\" (UID: \"251bd5c6-88f1-46eb-8a76-434c8e7a1e70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410005-5pnzr" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.885185 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cnchp" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.887694 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k54j\" (UniqueName: \"kubernetes.io/projected/34c0705d-7431-485f-b393-d8d4ebd53098-kube-api-access-2k54j\") pod \"package-server-manager-789f6589d5-rdmp7\" (UID: \"34c0705d-7431-485f-b393-d8d4ebd53098\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rdmp7" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.892788 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-57282" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.895445 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:08 crc kubenswrapper[4637]: E1201 14:48:08.895734 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:09.395722808 +0000 UTC m=+139.913431636 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.907186 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gqjlb" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.914007 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58sp9\" (UniqueName: \"kubernetes.io/projected/067f5196-983d-4c49-a194-68357dfb4963-kube-api-access-58sp9\") pod \"router-default-5444994796-nbf7h\" (UID: \"067f5196-983d-4c49-a194-68357dfb4963\") " pod="openshift-ingress/router-default-5444994796-nbf7h" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.914408 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmwvf\" (UniqueName: \"kubernetes.io/projected/b33b7298-4521-49ef-a7bd-5a7c94e9aed7-kube-api-access-nmwvf\") pod \"csi-hostpathplugin-f6mt7\" (UID: \"b33b7298-4521-49ef-a7bd-5a7c94e9aed7\") " pod="hostpath-provisioner/csi-hostpathplugin-f6mt7" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.914535 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9rwl6" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.920893 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c46sl" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.930692 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jtkrh"] Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.937786 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cx6wb" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.948108 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6sz5" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.948652 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjml7\" (UniqueName: \"kubernetes.io/projected/78924479-be14-4e3f-88fb-0fe9f58adc11-kube-api-access-pjml7\") pod \"packageserver-d55dfcdfc-wvhc8\" (UID: \"78924479-be14-4e3f-88fb-0fe9f58adc11\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wvhc8" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.958693 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wvhc8" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.964869 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29410005-5pnzr" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.976273 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6ccf\" (UniqueName: \"kubernetes.io/projected/08cd040f-3760-4400-a689-2d3806f22d94-kube-api-access-x6ccf\") pod \"catalog-operator-68c6474976-hj7gt\" (UID: \"08cd040f-3760-4400-a689-2d3806f22d94\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj7gt" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.976713 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rdmp7" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.984622 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b473b5ee-cb81-4f88-b995-76895c4462a8-bound-sa-token\") pod \"ingress-operator-5b745b69d9-j7ft6\" (UID: \"b473b5ee-cb81-4f88-b995-76895c4462a8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j7ft6" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.985962 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhnkh" Dec 01 14:48:08 crc kubenswrapper[4637]: I1201 14:48:08.992641 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j89cm\" (UniqueName: \"kubernetes.io/projected/cc88f839-deb8-4d34-bfb6-c8da6a9087d6-kube-api-access-j89cm\") pod \"service-ca-9c57cc56f-wpnfr\" (UID: \"cc88f839-deb8-4d34-bfb6-c8da6a9087d6\") " pod="openshift-service-ca/service-ca-9c57cc56f-wpnfr" Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.003414 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj7gt" Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.003984 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:09 crc kubenswrapper[4637]: E1201 14:48:09.004272 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:09.504258043 +0000 UTC m=+140.021966871 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.028364 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-f6mt7" Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.035286 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-lc9z9" Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.050458 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5ssw\" (UniqueName: \"kubernetes.io/projected/242bd093-da03-4995-9b63-e5cbc6c40650-kube-api-access-t5ssw\") pod \"multus-admission-controller-857f4d67dd-5js8r\" (UID: \"242bd093-da03-4995-9b63-e5cbc6c40650\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5js8r" Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.065656 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d56jg\" (UniqueName: \"kubernetes.io/projected/6e99277b-aa2d-4f8d-a2f9-aeb954080a27-kube-api-access-d56jg\") pod \"marketplace-operator-79b997595-244ll\" (UID: \"6e99277b-aa2d-4f8d-a2f9-aeb954080a27\") " pod="openshift-marketplace/marketplace-operator-79b997595-244ll" Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.069641 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6tmn\" (UniqueName: \"kubernetes.io/projected/60417f89-e1fc-48aa-ae2c-1a28adda9b65-kube-api-access-r6tmn\") pod \"kube-storage-version-migrator-operator-b67b599dd-f4jrv\" (UID: \"60417f89-e1fc-48aa-ae2c-1a28adda9b65\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f4jrv" Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.098035 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nslsx\" (UniqueName: \"kubernetes.io/projected/b473b5ee-cb81-4f88-b995-76895c4462a8-kube-api-access-nslsx\") pod \"ingress-operator-5b745b69d9-j7ft6\" (UID: \"b473b5ee-cb81-4f88-b995-76895c4462a8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j7ft6" Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.108151 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:09 crc kubenswrapper[4637]: E1201 14:48:09.108663 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:09.608646531 +0000 UTC m=+140.126355369 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.145395 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8rphr"] Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.166729 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f4jrv" Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.173677 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j7ft6" Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.178541 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-wpnfr" Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.198508 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-nbf7h" Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.218300 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:09 crc kubenswrapper[4637]: E1201 14:48:09.218644 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:09.718628871 +0000 UTC m=+140.236337699 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.228379 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-244ll" Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.255707 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-5js8r" Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.319253 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:09 crc kubenswrapper[4637]: E1201 14:48:09.319598 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:09.819586814 +0000 UTC m=+140.337295642 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:09 crc kubenswrapper[4637]: W1201 14:48:09.359676 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf705cd15_60f5_48ff_abc9_4a468bb32285.slice/crio-81c278ee1b1994fac8589ffe04fcfc3590877f4326df9eaa4de83741df319780 WatchSource:0}: Error finding container 81c278ee1b1994fac8589ffe04fcfc3590877f4326df9eaa4de83741df319780: Status 404 returned error can't find the container with id 81c278ee1b1994fac8589ffe04fcfc3590877f4326df9eaa4de83741df319780 Dec 01 14:48:09 crc kubenswrapper[4637]: W1201 14:48:09.395341 4637 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb60680f_a87a_4086_b701_91f89a1d123f.slice/crio-3283de10eba341a11f31120531ee50fe92fc6d17ccadb54a6771b250702dd16a WatchSource:0}: Error finding container 3283de10eba341a11f31120531ee50fe92fc6d17ccadb54a6771b250702dd16a: Status 404 returned error can't find the container with id 3283de10eba341a11f31120531ee50fe92fc6d17ccadb54a6771b250702dd16a Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.424706 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:09 crc kubenswrapper[4637]: E1201 14:48:09.425101 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:09.925086676 +0000 UTC m=+140.442795504 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.528581 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:09 crc kubenswrapper[4637]: E1201 14:48:09.528943 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:10.028913718 +0000 UTC m=+140.546622546 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.536527 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8rphr" event={"ID":"bb60680f-a87a-4086-b701-91f89a1d123f","Type":"ContainerStarted","Data":"3283de10eba341a11f31120531ee50fe92fc6d17ccadb54a6771b250702dd16a"} Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.537756 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl" event={"ID":"f778c361-3570-4a96-b4d1-1ba163ce04b9","Type":"ContainerStarted","Data":"afa61061464a753b82e0217201a29867318adeef4c803dc6a61174cf77313bcb"} Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.539426 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-z2gpq" event={"ID":"cbf56a46-431a-40ef-985f-8eb89ee80d70","Type":"ContainerStarted","Data":"ff145903317a06f18d0daa26f9c30f58940c73c3a60af79c211f9f55c9209ffc"} Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.539445 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-z2gpq" event={"ID":"cbf56a46-431a-40ef-985f-8eb89ee80d70","Type":"ContainerStarted","Data":"f272e9ba615de85d052fcd911a68a693f6c150b67f4ceb9e701cfdb0a86aea7d"} Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.540174 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-z2gpq" Dec 01 14:48:09 crc 
kubenswrapper[4637]: I1201 14:48:09.625703 4637 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gpq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.625755 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z2gpq" podUID="cbf56a46-431a-40ef-985f-8eb89ee80d70" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.629890 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:09 crc kubenswrapper[4637]: E1201 14:48:09.630024 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:10.130009556 +0000 UTC m=+140.647718384 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.630232 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:09 crc kubenswrapper[4637]: E1201 14:48:09.630500 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:10.13049287 +0000 UTC m=+140.648201698 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.724661 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bwdtf" event={"ID":"79026dfe-f0d3-4167-b85e-b1dc662b9548","Type":"ContainerStarted","Data":"47a0bb488bf22e4f15e14b6279e76f8f89fcbc740ea4bcdce3495cf8ed528272"} Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.740884 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:09 crc kubenswrapper[4637]: E1201 14:48:09.741693 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:10.241665617 +0000 UTC m=+140.759374445 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.741790 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:09 crc kubenswrapper[4637]: E1201 14:48:09.742087 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:10.242081059 +0000 UTC m=+140.759789887 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.764549 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9rwl6" event={"ID":"f705cd15-60f5-48ff-abc9-4a468bb32285","Type":"ContainerStarted","Data":"81c278ee1b1994fac8589ffe04fcfc3590877f4326df9eaa4de83741df319780"} Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.817923 4637 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-kvhqq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.817980 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" podUID="b7950539-3e35-4e74-8a69-c2b3c3ba928f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.830684 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-98z2t" podStartSLOduration=120.830662494 podStartE2EDuration="2m0.830662494s" podCreationTimestamp="2025-12-01 14:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:09.817836753 
+0000 UTC m=+140.335545581" watchObservedRunningTime="2025-12-01 14:48:09.830662494 +0000 UTC m=+140.348371322" Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.844702 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:09 crc kubenswrapper[4637]: E1201 14:48:09.845341 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:10.345325213 +0000 UTC m=+140.863034041 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.848727 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-zllnr" event={"ID":"c7dfebc2-dcf5-480c-81fa-534f5f0b739e","Type":"ContainerStarted","Data":"1b60da61fdc80c8664d30347ff0233031e4ee4db07e54b81241e1294e7b9a68e"} Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.848756 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-z4qtk"] Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.848773 4637 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-zllnr" Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.848781 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jtkrh" event={"ID":"6f680eac-8309-428b-9b5e-f5324aaf426a","Type":"ContainerStarted","Data":"fde7607b5fab0c0d5a0f1082708d3281fc7ff4de46bb6ee1d7c75d230bfc3793"} Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.848799 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" event={"ID":"b7950539-3e35-4e74-8a69-c2b3c3ba928f","Type":"ContainerStarted","Data":"4096694e623c2ce7a7db3c7651342cea4f694680716d3e885d64dfa3cd20708f"} Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.848810 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.848820 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6mf4" event={"ID":"0ad42fe5-9e02-4d4e-849e-a03f83b4346d","Type":"ContainerStarted","Data":"a03f870ae8cbe2d2f2d4f9084a4890c5a2ee23f45ee84f6094dc0b36fec032b8"} Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.848828 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6mf4" event={"ID":"0ad42fe5-9e02-4d4e-849e-a03f83b4346d","Type":"ContainerStarted","Data":"bf702b458b96e84446934587ef0756c0909dae4c39e5639aa425b9f0c2e8b27b"} Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.848838 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8t9kz"] Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.848848 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6cwvd"] Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.848856 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j4nw2"] Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.857549 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-lmzmr"] Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.883714 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-btm95"] Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.931916 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-tcvq5"] Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.947371 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:09 crc kubenswrapper[4637]: E1201 14:48:09.948686 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:10.448670859 +0000 UTC m=+140.966379687 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:09 crc kubenswrapper[4637]: I1201 14:48:09.970310 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-zllnr" Dec 01 14:48:10 crc kubenswrapper[4637]: I1201 14:48:10.051177 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:10 crc kubenswrapper[4637]: E1201 14:48:10.052041 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:10.552022045 +0000 UTC m=+141.069730873 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:10 crc kubenswrapper[4637]: I1201 14:48:10.077543 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-flvbk"] Dec 01 14:48:10 crc kubenswrapper[4637]: W1201 14:48:10.118940 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7bc889e_bf57_415e_829b_9f1b91253db0.slice/crio-c72e8771621200ed7888c3b9d3ccca8f280eef18454a96aaf20dad3e920152ba WatchSource:0}: Error finding container c72e8771621200ed7888c3b9d3ccca8f280eef18454a96aaf20dad3e920152ba: Status 404 returned error can't find the container with id c72e8771621200ed7888c3b9d3ccca8f280eef18454a96aaf20dad3e920152ba Dec 01 14:48:10 crc kubenswrapper[4637]: I1201 14:48:10.152867 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:10 crc kubenswrapper[4637]: E1201 14:48:10.153365 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:10.653332471 +0000 UTC m=+141.171041299 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:10 crc kubenswrapper[4637]: I1201 14:48:10.156118 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pxf52"] Dec 01 14:48:10 crc kubenswrapper[4637]: I1201 14:48:10.255786 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:10 crc kubenswrapper[4637]: E1201 14:48:10.260621 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:10.760598106 +0000 UTC m=+141.278306934 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:10 crc kubenswrapper[4637]: I1201 14:48:10.260794 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:10 crc kubenswrapper[4637]: E1201 14:48:10.261070 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:10.76106305 +0000 UTC m=+141.278771878 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:10 crc kubenswrapper[4637]: I1201 14:48:10.317240 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cnchp"] Dec 01 14:48:10 crc kubenswrapper[4637]: I1201 14:48:10.361987 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:10 crc kubenswrapper[4637]: E1201 14:48:10.362571 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:10.86255298 +0000 UTC m=+141.380261808 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:10 crc kubenswrapper[4637]: I1201 14:48:10.459138 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wvhc8"] Dec 01 14:48:10 crc kubenswrapper[4637]: I1201 14:48:10.464885 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:10 crc kubenswrapper[4637]: E1201 14:48:10.465262 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:10.965248037 +0000 UTC m=+141.482956865 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:10 crc kubenswrapper[4637]: I1201 14:48:10.518173 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-c46sl"] Dec 01 14:48:10 crc kubenswrapper[4637]: I1201 14:48:10.567726 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gqjlb"] Dec 01 14:48:10 crc kubenswrapper[4637]: I1201 14:48:10.568704 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:10 crc kubenswrapper[4637]: E1201 14:48:10.569038 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:11.069021346 +0000 UTC m=+141.586730164 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:10 crc kubenswrapper[4637]: I1201 14:48:10.672817 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:10 crc kubenswrapper[4637]: E1201 14:48:10.678677 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:11.178658776 +0000 UTC m=+141.696367604 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:10 crc kubenswrapper[4637]: I1201 14:48:10.684732 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lc9z9"] Dec 01 14:48:10 crc kubenswrapper[4637]: I1201 14:48:10.692562 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wpnfr"] Dec 01 14:48:10 crc kubenswrapper[4637]: I1201 14:48:10.764686 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsmrs" podStartSLOduration=121.764672123 podStartE2EDuration="2m1.764672123s" podCreationTimestamp="2025-12-01 14:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:10.703268727 +0000 UTC m=+141.220977555" watchObservedRunningTime="2025-12-01 14:48:10.764672123 +0000 UTC m=+141.282380951" Dec 01 14:48:10 crc kubenswrapper[4637]: I1201 14:48:10.771654 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-57282"] Dec 01 14:48:10 crc kubenswrapper[4637]: I1201 14:48:10.778302 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:10 crc kubenswrapper[4637]: E1201 14:48:10.778682 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:11.2786664 +0000 UTC m=+141.796375228 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:10 crc kubenswrapper[4637]: I1201 14:48:10.881639 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:10 crc kubenswrapper[4637]: E1201 14:48:10.882316 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:11.382304835 +0000 UTC m=+141.900013663 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:10.977733 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" podStartSLOduration=121.97771941 podStartE2EDuration="2m1.97771941s" podCreationTimestamp="2025-12-01 14:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:10.976272966 +0000 UTC m=+141.493981794" watchObservedRunningTime="2025-12-01 14:48:10.97771941 +0000 UTC m=+141.495428238" Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:10.986655 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:11 crc kubenswrapper[4637]: E1201 14:48:10.999494 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:11.499468735 +0000 UTC m=+142.017177563 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.011085 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6sz5"] Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.077987 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl" event={"ID":"f778c361-3570-4a96-b4d1-1ba163ce04b9","Type":"ContainerStarted","Data":"74fe00aa2365b32a72af7de3e162f5a6307546f50111bd6a41df7921536972da"} Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.078743 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl" Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.102907 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-z2gpq" podStartSLOduration=122.102878803 podStartE2EDuration="2m2.102878803s" podCreationTimestamp="2025-12-01 14:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:11.075333681 +0000 UTC m=+141.593042509" watchObservedRunningTime="2025-12-01 14:48:11.102878803 +0000 UTC m=+141.620587631" Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.104055 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:11 crc kubenswrapper[4637]: E1201 14:48:11.122338 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:11.622320737 +0000 UTC m=+142.140029565 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.160016 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" podStartSLOduration=121.159995178 podStartE2EDuration="2m1.159995178s" podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:11.127393981 +0000 UTC m=+141.645102809" watchObservedRunningTime="2025-12-01 14:48:11.159995178 +0000 UTC m=+141.677704006" Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.171615 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-zllnr" podStartSLOduration=122.171594792 podStartE2EDuration="2m2.171594792s" podCreationTimestamp="2025-12-01 14:46:09 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:11.158593285 +0000 UTC m=+141.676302113" watchObservedRunningTime="2025-12-01 14:48:11.171594792 +0000 UTC m=+141.689303620" Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.181240 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8t9kz" event={"ID":"2d8f1b03-2eb3-4a9a-9d89-860b3efce88b","Type":"ContainerStarted","Data":"d353c41abf216d7364b11962c07022b389d4b719bfcf1ea995128dc44698543c"} Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.181286 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8t9kz" event={"ID":"2d8f1b03-2eb3-4a9a-9d89-860b3efce88b","Type":"ContainerStarted","Data":"2811b15afa8afe4e2788cbbe99a6b02fb9c2d1e2af4aaf812cc540d3a1ed58ed"} Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.220032 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:11 crc kubenswrapper[4637]: E1201 14:48:11.221085 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:11.721068893 +0000 UTC m=+142.238777721 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.263065 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl" Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.263536 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6mf4" event={"ID":"0ad42fe5-9e02-4d4e-849e-a03f83b4346d","Type":"ContainerStarted","Data":"d022899ba522ea3038e2416b39b5841937faf46e24ffb8ae3c4676c31f07a944"} Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.329294 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:11 crc kubenswrapper[4637]: E1201 14:48:11.329896 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:11.829884667 +0000 UTC m=+142.347593495 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.378524 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jtkrh" event={"ID":"6f680eac-8309-428b-9b5e-f5324aaf426a","Type":"ContainerStarted","Data":"714152150ae49ddd05baaaa6d76332f97cc89b2c32d480519c65459922c8bcea"} Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.383817 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" event={"ID":"b7950539-3e35-4e74-8a69-c2b3c3ba928f","Type":"ContainerStarted","Data":"54438a017834f259b9f6b91c4d23d0b07629e2b1718ba73047521709c3728880"} Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.419798 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-nbf7h" event={"ID":"067f5196-983d-4c49-a194-68357dfb4963","Type":"ContainerStarted","Data":"5eba5885c77fea496f7341453104d1fda841600d21025d711aa38f7008c8437c"} Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.419852 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-nbf7h" event={"ID":"067f5196-983d-4c49-a194-68357dfb4963","Type":"ContainerStarted","Data":"50fb90b9ec60000d65b0ed7291714b6d3806704ba4d8d2150c769a90e0a9a326"} Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.476589 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:11 crc kubenswrapper[4637]: E1201 14:48:11.477659 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:11.977620718 +0000 UTC m=+142.495329606 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.506594 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-5js8r"] Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.510047 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cx6wb"] Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.520813 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c46sl" event={"ID":"8ee48f7d-1905-49a4-b201-4a20be970a40","Type":"ContainerStarted","Data":"eb4de97640568776557783089591fc0b88e5770c1fa17602dcd3ef08db02db76"} Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.527661 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-j7ft6"] Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 
14:48:11.533310 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-tcvq5" event={"ID":"e7bc889e-bf57-415e-829b-9f1b91253db0","Type":"ContainerStarted","Data":"c72e8771621200ed7888c3b9d3ccca8f280eef18454a96aaf20dad3e920152ba"} Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.573970 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-f6mt7"] Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.586468 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.590312 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-244ll"] Dec 01 14:48:11 crc kubenswrapper[4637]: E1201 14:48:11.593687 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:12.093671083 +0000 UTC m=+142.611379991 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.603138 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410005-5pnzr"] Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.608625 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bwdtf" event={"ID":"79026dfe-f0d3-4167-b85e-b1dc662b9548","Type":"ContainerStarted","Data":"4f342a4206b4405ff25af99a2f85cac9cca3ecb4b3723d75f09fd6546b29771e"} Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.676471 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.676542 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rdmp7"] Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.680455 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f4jrv"] Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.681209 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wvhc8" event={"ID":"78924479-be14-4e3f-88fb-0fe9f58adc11","Type":"ContainerStarted","Data":"ae0bd3264a002a2ae6293b59b39385c20f9efb4654d345130b8a264cc8803640"} Dec 01 14:48:11 crc 
kubenswrapper[4637]: I1201 14:48:11.682626 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhnkh"] Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.690432 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:11 crc kubenswrapper[4637]: E1201 14:48:11.704200 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:12.204173848 +0000 UTC m=+142.721882676 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.704562 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ncnx"] Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.712761 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj7gt"] Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.755901 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-server-9rwl6" event={"ID":"f705cd15-60f5-48ff-abc9-4a468bb32285","Type":"ContainerStarted","Data":"263f8b32d93103d29f3db125c3ea1d9be436920223655f40efe913c699b48fe1"} Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.766283 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pxf52" event={"ID":"224a5686-f4fa-4f6b-b397-9a39415f6cf0","Type":"ContainerStarted","Data":"a464b54ca935d5ec9f915268f70f904d96f341d745b23b34b61c20bcceb41811"} Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.787828 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6cwvd" event={"ID":"c77fa87b-60af-46d6-a0e1-ef83ee35ba3f","Type":"ContainerStarted","Data":"4c4666988e63dc1fba0aada0366f6f5108c3bde0246cb8567724cf43ff6b3501"} Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.789037 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cnchp" event={"ID":"02ecc491-496e-4bcf-bba9-ff48b415c10a","Type":"ContainerStarted","Data":"697643559e36d324322f34d4ea990e7cd18c835673473d3f1a510233441bf7b0"} Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.791739 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:11 crc kubenswrapper[4637]: E1201 14:48:11.793379 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-01 14:48:12.293366373 +0000 UTC m=+142.811075201 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.796709 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j4nw2" event={"ID":"33b8eb32-3572-48d9-a322-5ff99b870d99","Type":"ContainerStarted","Data":"dffd9e27b022a23cc71b86705ef673604041652bf653a49254417540b59e0bc9"} Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.796760 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j4nw2" event={"ID":"33b8eb32-3572-48d9-a322-5ff99b870d99","Type":"ContainerStarted","Data":"0808ac6622f6b74c508bb34f073985b108e47d63fcf2a7958781632136ccc03e"} Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.806444 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8rphr" event={"ID":"bb60680f-a87a-4086-b701-91f89a1d123f","Type":"ContainerStarted","Data":"48f4113f704c56a03be71f12c01b88632d67bc3ad0e4364023190de000da396c"} Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.806803 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-8rphr" Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.814138 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" 
event={"ID":"6a441cc0-e2d3-4572-a5f1-2ed8420bdced","Type":"ContainerStarted","Data":"8ff621b03fbe5e5aeaf076d4d82681d5ca3a64a66ebd2a7760233a7bdb51ac16"} Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.821175 4637 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-8rphr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.821235 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-8rphr" podUID="bb60680f-a87a-4086-b701-91f89a1d123f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.839916 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" event={"ID":"8720c46c-d2b9-4f4e-8ca0-379ff5e30923","Type":"ContainerStarted","Data":"19e870e9e2e7268814a647b50800ffe496194643d3ed3d0506e54770c369e7ed"} Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.847635 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-btm95" event={"ID":"ff0ed963-2cbb-46cd-9f0d-33a19a29c99f","Type":"ContainerStarted","Data":"d5758f0547716bef355e4c4116a168defbeeabe52853d36068cc7db16557cc62"} Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.851367 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-flvbk" event={"ID":"1216bb3b-7fde-42f8-b2f8-7b070dd63690","Type":"ContainerStarted","Data":"bcbf3663293a805e0cad1dd7dcbb262aa1b6afcb44cd9517bc9f584dcd79adf9"} Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.861000 
4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4qtk" event={"ID":"0b788a05-de9f-4ef6-985d-d9569f4a9860","Type":"ContainerStarted","Data":"e0c9a363f0c14244924b72a39d9c851f325e0920d3d6fe4a5c9e30c281d2c444"} Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.864139 4637 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gpq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.864200 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z2gpq" podUID="cbf56a46-431a-40ef-985f-8eb89ee80d70" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.896261 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:11 crc kubenswrapper[4637]: E1201 14:48:11.897667 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:12.397647528 +0000 UTC m=+142.915356356 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.964514 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl" podStartSLOduration=121.96449762 podStartE2EDuration="2m1.96449762s" podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:11.962430107 +0000 UTC m=+142.480138935" watchObservedRunningTime="2025-12-01 14:48:11.96449762 +0000 UTC m=+142.482206448" Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.971571 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" Dec 01 14:48:11 crc kubenswrapper[4637]: I1201 14:48:11.972053 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:11.995715 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.000177 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: 
\"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:12 crc kubenswrapper[4637]: E1201 14:48:12.001104 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:12.501091607 +0000 UTC m=+143.018800435 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.069438 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bwdtf" podStartSLOduration=124.069422655 podStartE2EDuration="2m4.069422655s" podCreationTimestamp="2025-12-01 14:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:12.068696103 +0000 UTC m=+142.586404931" watchObservedRunningTime="2025-12-01 14:48:12.069422655 +0000 UTC m=+142.587131483" Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.117709 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:12 crc kubenswrapper[4637]: E1201 14:48:12.118023 4637 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:12.618000259 +0000 UTC m=+143.135709087 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.200571 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-nbf7h" Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.212674 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:12 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:12 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:12 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.213497 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.227331 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:12 crc kubenswrapper[4637]: E1201 14:48:12.227921 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:12.727901875 +0000 UTC m=+143.245610763 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.232753 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-8rphr" podStartSLOduration=122.232738593 podStartE2EDuration="2m2.232738593s" podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:12.23198399 +0000 UTC m=+142.749692818" watchObservedRunningTime="2025-12-01 14:48:12.232738593 +0000 UTC m=+142.750447421" Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.248781 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6cwvd" podStartSLOduration=122.248762252 podStartE2EDuration="2m2.248762252s" podCreationTimestamp="2025-12-01 
14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:12.248483614 +0000 UTC m=+142.766192442" watchObservedRunningTime="2025-12-01 14:48:12.248762252 +0000 UTC m=+142.766471100" Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.266989 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-nbf7h" podStartSLOduration=122.266966038 podStartE2EDuration="2m2.266966038s" podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:12.263242244 +0000 UTC m=+142.780951072" watchObservedRunningTime="2025-12-01 14:48:12.266966038 +0000 UTC m=+142.784674866" Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.290659 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6mf4" podStartSLOduration=123.290626601 podStartE2EDuration="2m3.290626601s" podCreationTimestamp="2025-12-01 14:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:12.290158697 +0000 UTC m=+142.807867525" watchObservedRunningTime="2025-12-01 14:48:12.290626601 +0000 UTC m=+142.808335429" Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.328502 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:12 crc kubenswrapper[4637]: E1201 14:48:12.328958 4637 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:12.82892479 +0000 UTC m=+143.346633618 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.393561 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j4nw2" podStartSLOduration=123.393543155 podStartE2EDuration="2m3.393543155s" podCreationTimestamp="2025-12-01 14:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:12.346259531 +0000 UTC m=+142.863968359" watchObservedRunningTime="2025-12-01 14:48:12.393543155 +0000 UTC m=+142.911251983" Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.394180 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-9rwl6" podStartSLOduration=7.394176484 podStartE2EDuration="7.394176484s" podCreationTimestamp="2025-12-01 14:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:12.393187313 +0000 UTC m=+142.910896141" watchObservedRunningTime="2025-12-01 14:48:12.394176484 +0000 UTC m=+142.911885312" Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.429513 4637 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:12 crc kubenswrapper[4637]: E1201 14:48:12.430063 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:12.9300502 +0000 UTC m=+143.447759018 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.542299 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:12 crc kubenswrapper[4637]: E1201 14:48:12.542900 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:13.042873256 +0000 UTC m=+143.560582084 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.644669 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:12 crc kubenswrapper[4637]: E1201 14:48:12.645128 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:13.145111128 +0000 UTC m=+143.662819956 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.745609 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:12 crc kubenswrapper[4637]: E1201 14:48:12.746281 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:13.246266078 +0000 UTC m=+143.763974906 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.847581 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:12 crc kubenswrapper[4637]: E1201 14:48:12.847872 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:13.347861331 +0000 UTC m=+143.865570159 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.865600 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ncnx" event={"ID":"63c18ab1-c963-4bed-a0b5-7d0873eebbac","Type":"ContainerStarted","Data":"fa0de776335f7a3629764e34bac45d37ee8c0fd9c0e471bc8125ea3285a17741"} Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.866494 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-tcvq5" event={"ID":"e7bc889e-bf57-415e-829b-9f1b91253db0","Type":"ContainerStarted","Data":"8f0aabdde07f777623d9281d36b9e8147ffa1f89442b34cec5a4d87bab6bb3a8"} Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.868149 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lc9z9" event={"ID":"f7c1aabd-a992-44af-8ace-e8f61ef4c55f","Type":"ContainerStarted","Data":"f4916743b4f92813dad0e6844cd494d39e818a284962afede5226364e9229d11"} Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.869806 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jtkrh" event={"ID":"6f680eac-8309-428b-9b5e-f5324aaf426a","Type":"ContainerStarted","Data":"6dc7d4b521e69f136d4bd19d68df5ce6d1be693cc589bebd52a53d76d174200f"} Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.871812 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-244ll" 
event={"ID":"6e99277b-aa2d-4f8d-a2f9-aeb954080a27","Type":"ContainerStarted","Data":"0ae5758cfe97973212295788a53d3abda0d20f0d871df2eeed7cf1293d2fc0e5"} Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.875448 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29410005-5pnzr" event={"ID":"251bd5c6-88f1-46eb-8a76-434c8e7a1e70","Type":"ContainerStarted","Data":"4dfd0afe121f642bb7dbf3832be105849782775a8a07614309b99cede357ae78"} Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.876287 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-flvbk" event={"ID":"1216bb3b-7fde-42f8-b2f8-7b070dd63690","Type":"ContainerStarted","Data":"052576fd8261e6f751b778aacd2e415e692b7c0e620f2782394d25c830ed4154"} Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.877371 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cnchp" event={"ID":"02ecc491-496e-4bcf-bba9-ff48b415c10a","Type":"ContainerStarted","Data":"513a7c422e05b5db36686efc95196e8188bd1cc7366efd03b2866580d3ce5281"} Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.878990 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j7ft6" event={"ID":"b473b5ee-cb81-4f88-b995-76895c4462a8","Type":"ContainerStarted","Data":"3b0d6196e7a24d94bca417829f4c35802ca7e32bd67a701c3e3e68592961b1a0"} Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.879790 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gqjlb" event={"ID":"6a8d90d8-8039-4557-9bc7-00e63e0cb5d3","Type":"ContainerStarted","Data":"c53c091d8845320f3d7465ceca42b9a16f8ee8a92181496206b5295f6a7ca7b1"} Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.879813 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-gqjlb" event={"ID":"6a8d90d8-8039-4557-9bc7-00e63e0cb5d3","Type":"ContainerStarted","Data":"bbbbf87a1c58734d9d5051fec1b1fb08d14fbca200b96b233d2b5ad73349dd5d"} Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.880796 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f4jrv" event={"ID":"60417f89-e1fc-48aa-ae2c-1a28adda9b65","Type":"ContainerStarted","Data":"b26dafb3bfc1ddf077621d283a0943f218e7b3b9b3675eadfcfe88d981c1fb06"} Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.881975 4637 generic.go:334] "Generic (PLEG): container finished" podID="0b788a05-de9f-4ef6-985d-d9569f4a9860" containerID="f80ba55d7c595bd01b56999dc317c782e58e9d70b8477ca14715e114cbaf7152" exitCode=0 Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.882013 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4qtk" event={"ID":"0b788a05-de9f-4ef6-985d-d9569f4a9860","Type":"ContainerDied","Data":"f80ba55d7c595bd01b56999dc317c782e58e9d70b8477ca14715e114cbaf7152"} Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.882857 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-5js8r" event={"ID":"242bd093-da03-4995-9b63-e5cbc6c40650","Type":"ContainerStarted","Data":"a6b571905b054394ed71a38c1bc7a3589ed3c44489c672badf3e59fa71c4116e"} Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.891582 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj7gt" event={"ID":"08cd040f-3760-4400-a689-2d3806f22d94","Type":"ContainerStarted","Data":"cd460537fb6396ddb316c544e634c47907acf9532fdb17af9c5002f439e82b8d"} Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.893572 4637 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-service-ca/service-ca-9c57cc56f-wpnfr" event={"ID":"cc88f839-deb8-4d34-bfb6-c8da6a9087d6","Type":"ContainerStarted","Data":"cdb3bca059edd8d41d2168618fbf1bfe3fbb1ad21be15624ff890db6831fae0a"} Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.895569 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhnkh" event={"ID":"23aa9f67-7343-49fc-884c-a48fef29649b","Type":"ContainerStarted","Data":"c584ae6f6bed3cdd814c3d4d31b42221338530d7229c64c0dfe0c1e83f00768b"} Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.896813 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wvhc8" event={"ID":"78924479-be14-4e3f-88fb-0fe9f58adc11","Type":"ContainerStarted","Data":"6614678342b088dbd98c88470fb9ae301c713de94b771751647032ebd8efdcf8"} Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.897871 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wvhc8" Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.899325 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6sz5" event={"ID":"369bf28c-11f9-494a-8a91-a11e861d84e0","Type":"ContainerStarted","Data":"11b0bb4ab7d36a32cc28a6623de8e862beced7889b5d84dd71092b2dcf26645c"} Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.899347 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6sz5" event={"ID":"369bf28c-11f9-494a-8a91-a11e861d84e0","Type":"ContainerStarted","Data":"d6dce3a4de237cf65cdf28444eec928e64ef33006762219f4ac1199a597aa63e"} Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.906918 4637 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-wvhc8 container/packageserver 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.906998 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wvhc8" podUID="78924479-be14-4e3f-88fb-0fe9f58adc11" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.940203 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rdmp7" event={"ID":"34c0705d-7431-485f-b393-d8d4ebd53098","Type":"ContainerStarted","Data":"97be62059db75df79b24b307aa495749839678d4efba13849c6a89b72430d885"} Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.942826 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-57282" event={"ID":"03fa0315-7dd7-466c-a22b-3597d724a281","Type":"ContainerStarted","Data":"9883563e2c5883caf4c1f582f9748e65804f68747c70ac3f085afef38af29dea"} Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.942882 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-57282" event={"ID":"03fa0315-7dd7-466c-a22b-3597d724a281","Type":"ContainerStarted","Data":"e880637d14ba16f711cc822a26e56b451fa778b7f6cb25a4c3ddd39230529e89"} Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.949770 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-btm95" 
event={"ID":"ff0ed963-2cbb-46cd-9f0d-33a19a29c99f","Type":"ContainerStarted","Data":"866789bb0cd24d0ed6af0217aa8865ca4e9fa23b014219279307737fc7676bf4"} Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.952850 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-tcvq5" podStartSLOduration=122.952838328 podStartE2EDuration="2m2.952838328s" podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:12.950665621 +0000 UTC m=+143.468374449" watchObservedRunningTime="2025-12-01 14:48:12.952838328 +0000 UTC m=+143.470547156" Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.954193 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-8t9kz" podStartSLOduration=123.954186879 podStartE2EDuration="2m3.954186879s" podCreationTimestamp="2025-12-01 14:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:12.444561393 +0000 UTC m=+142.962270221" watchObservedRunningTime="2025-12-01 14:48:12.954186879 +0000 UTC m=+143.471895707" Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.955696 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:12 crc kubenswrapper[4637]: E1201 14:48:12.955817 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:13.455789218 +0000 UTC m=+143.973498046 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:12 crc kubenswrapper[4637]: I1201 14:48:12.956873 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:12 crc kubenswrapper[4637]: E1201 14:48:12.958560 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:13.458536062 +0000 UTC m=+143.976244890 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:13 crc kubenswrapper[4637]: I1201 14:48:13.004575 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c46sl" event={"ID":"8ee48f7d-1905-49a4-b201-4a20be970a40","Type":"ContainerStarted","Data":"52a7c16a58b9a7dc01f72aaf1d24aeebd49172419048c9471b6caa557b492514"} Dec 01 14:48:13 crc kubenswrapper[4637]: I1201 14:48:13.030552 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pxf52" event={"ID":"224a5686-f4fa-4f6b-b397-9a39415f6cf0","Type":"ContainerStarted","Data":"9a083488c41008b31e267e89aee2f203939c90c25725ecd6ad4df90491826f4c"} Dec 01 14:48:13 crc kubenswrapper[4637]: I1201 14:48:13.035794 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6cwvd" event={"ID":"c77fa87b-60af-46d6-a0e1-ef83ee35ba3f","Type":"ContainerStarted","Data":"da15702bc93847dc6c4f434d9b4ae1421efdef7b48be9a4cf3693ed5937092cd"} Dec 01 14:48:13 crc kubenswrapper[4637]: I1201 14:48:13.037612 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" event={"ID":"6a441cc0-e2d3-4572-a5f1-2ed8420bdced","Type":"ContainerStarted","Data":"60cb8a349a797d86d87f49a10ba4cadb8c8b76d1fb8cb55dedd857416a9bbc3f"} Dec 01 14:48:13 crc kubenswrapper[4637]: I1201 14:48:13.039800 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f6mt7" 
event={"ID":"b33b7298-4521-49ef-a7bd-5a7c94e9aed7","Type":"ContainerStarted","Data":"da8d9f3f43f6276eca93cd488a16fe5f2c5e7427fc2ad233664887d879087474"} Dec 01 14:48:13 crc kubenswrapper[4637]: I1201 14:48:13.057198 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cx6wb" event={"ID":"750daab9-bb17-49c4-9db0-cae26692ded5","Type":"ContainerStarted","Data":"0d3dbc0d95977a39db2d9d13998fcde4e6be90fbf94f0aa7bd0c09c44b2875e4"} Dec 01 14:48:13 crc kubenswrapper[4637]: I1201 14:48:13.059126 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:13 crc kubenswrapper[4637]: I1201 14:48:13.059184 4637 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gpq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Dec 01 14:48:13 crc kubenswrapper[4637]: I1201 14:48:13.059232 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z2gpq" podUID="cbf56a46-431a-40ef-985f-8eb89ee80d70" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Dec 01 14:48:13 crc kubenswrapper[4637]: E1201 14:48:13.105207 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:13.605183651 +0000 UTC m=+144.122892479 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:13 crc kubenswrapper[4637]: I1201 14:48:13.112500 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mdrrl" Dec 01 14:48:13 crc kubenswrapper[4637]: I1201 14:48:13.112896 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-8rphr" Dec 01 14:48:13 crc kubenswrapper[4637]: I1201 14:48:13.140388 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-btm95" podStartSLOduration=123.140373446 podStartE2EDuration="2m3.140373446s" podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:13.138153597 +0000 UTC m=+143.655862425" watchObservedRunningTime="2025-12-01 14:48:13.140373446 +0000 UTC m=+143.658082274" Dec 01 14:48:13 crc kubenswrapper[4637]: I1201 14:48:13.140612 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cnchp" podStartSLOduration=123.140607383 podStartE2EDuration="2m3.140607383s" podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:13.053307897 +0000 UTC 
m=+143.571016725" watchObservedRunningTime="2025-12-01 14:48:13.140607383 +0000 UTC m=+143.658316211" Dec 01 14:48:13 crc kubenswrapper[4637]: I1201 14:48:13.167086 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:13 crc kubenswrapper[4637]: E1201 14:48:13.169732 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:13.669719182 +0000 UTC m=+144.187428010 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:13 crc kubenswrapper[4637]: I1201 14:48:13.193536 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wvhc8" podStartSLOduration=123.193474168 podStartE2EDuration="2m3.193474168s" podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:13.186129464 +0000 UTC m=+143.703838302" watchObservedRunningTime="2025-12-01 14:48:13.193474168 +0000 UTC m=+143.711182996" Dec 01 14:48:13 crc 
kubenswrapper[4637]: I1201 14:48:13.215780 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:13 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:13 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:13 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:13 crc kubenswrapper[4637]: I1201 14:48:13.215828 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:13 crc kubenswrapper[4637]: I1201 14:48:13.247639 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-jtkrh" podStartSLOduration=123.247624932 podStartE2EDuration="2m3.247624932s" podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:13.246608141 +0000 UTC m=+143.764316969" watchObservedRunningTime="2025-12-01 14:48:13.247624932 +0000 UTC m=+143.765333760" Dec 01 14:48:13 crc kubenswrapper[4637]: I1201 14:48:13.277117 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:13 crc kubenswrapper[4637]: E1201 14:48:13.277918 4637 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:13.777904636 +0000 UTC m=+144.295613464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:13 crc kubenswrapper[4637]: I1201 14:48:13.389609 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:13 crc kubenswrapper[4637]: E1201 14:48:13.390415 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:13.890394732 +0000 UTC m=+144.408103560 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:13 crc kubenswrapper[4637]: I1201 14:48:13.411660 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6sz5" podStartSLOduration=123.411643612 podStartE2EDuration="2m3.411643612s" podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:13.332728631 +0000 UTC m=+143.850437459" watchObservedRunningTime="2025-12-01 14:48:13.411643612 +0000 UTC m=+143.929352440" Dec 01 14:48:13 crc kubenswrapper[4637]: I1201 14:48:13.492544 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:13 crc kubenswrapper[4637]: E1201 14:48:13.493012 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:13.992996386 +0000 UTC m=+144.510705214 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:13 crc kubenswrapper[4637]: I1201 14:48:13.528104 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gqjlb" podStartSLOduration=123.528075808 podStartE2EDuration="2m3.528075808s" podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:13.418387727 +0000 UTC m=+143.936096555" watchObservedRunningTime="2025-12-01 14:48:13.528075808 +0000 UTC m=+144.045784636" Dec 01 14:48:13 crc kubenswrapper[4637]: I1201 14:48:13.599814 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:13 crc kubenswrapper[4637]: E1201 14:48:13.600154 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:14.100137458 +0000 UTC m=+144.617846276 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:13 crc kubenswrapper[4637]: I1201 14:48:13.641179 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-pxf52" podStartSLOduration=8.641148261 podStartE2EDuration="8.641148261s" podCreationTimestamp="2025-12-01 14:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:13.593571808 +0000 UTC m=+144.111280636" watchObservedRunningTime="2025-12-01 14:48:13.641148261 +0000 UTC m=+144.158857089" Dec 01 14:48:13 crc kubenswrapper[4637]: I1201 14:48:13.700787 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:13 crc kubenswrapper[4637]: E1201 14:48:13.701060 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:14.20104605 +0000 UTC m=+144.718754878 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:13 crc kubenswrapper[4637]: I1201 14:48:13.801791 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:13 crc kubenswrapper[4637]: E1201 14:48:13.802437 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:14.302425507 +0000 UTC m=+144.820134325 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:13 crc kubenswrapper[4637]: I1201 14:48:13.904006 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:13 crc kubenswrapper[4637]: E1201 14:48:13.905171 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:14.405155945 +0000 UTC m=+144.922864773 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.005946 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:14 crc kubenswrapper[4637]: E1201 14:48:14.006583 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:14.506571253 +0000 UTC m=+145.024280081 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.110959 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:14 crc kubenswrapper[4637]: E1201 14:48:14.111311 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:14.611297581 +0000 UTC m=+145.129006409 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.209228 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:14 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:14 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:14 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.209280 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.216047 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:14 crc kubenswrapper[4637]: E1201 14:48:14.216331 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-01 14:48:14.716320809 +0000 UTC m=+145.234029637 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.242637 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c46sl" event={"ID":"8ee48f7d-1905-49a4-b201-4a20be970a40","Type":"ContainerStarted","Data":"11d3246633689336d27e22164c17538d70fc7e53bae85eeec6c27593b18e3c95"} Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.302495 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c46sl" podStartSLOduration=124.302480771 podStartE2EDuration="2m4.302480771s" podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:14.301459279 +0000 UTC m=+144.819168107" watchObservedRunningTime="2025-12-01 14:48:14.302480771 +0000 UTC m=+144.820189599" Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.317226 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:14 crc kubenswrapper[4637]: E1201 14:48:14.317572 4637 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:14.817543681 +0000 UTC m=+145.335252509 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.317676 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:14 crc kubenswrapper[4637]: E1201 14:48:14.318046 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:14.818039536 +0000 UTC m=+145.335748364 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.326260 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-5js8r" event={"ID":"242bd093-da03-4995-9b63-e5cbc6c40650","Type":"ContainerStarted","Data":"d9fd9b86e74df18ae62334e4935a691e4e15fd8b4e55fabcbc5e400252cd79db"} Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.330990 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj7gt" event={"ID":"08cd040f-3760-4400-a689-2d3806f22d94","Type":"ContainerStarted","Data":"efe0e99a8da695ffc6798af89dc09ae8743143aecba20e8aaf748a03bfce1e5c"} Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.331266 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj7gt" Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.335111 4637 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-hj7gt container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.335160 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj7gt" podUID="08cd040f-3760-4400-a689-2d3806f22d94" containerName="catalog-operator" 
probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.347254 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-244ll" event={"ID":"6e99277b-aa2d-4f8d-a2f9-aeb954080a27","Type":"ContainerStarted","Data":"df466ebd6e91862047f9fc1afefa0f41c383aa719d1527b6f8d874f1961d0a2b"} Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.348358 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-244ll" Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.354964 4637 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-244ll container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.355137 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-244ll" podUID="6e99277b-aa2d-4f8d-a2f9-aeb954080a27" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.359775 4637 generic.go:334] "Generic (PLEG): container finished" podID="6a441cc0-e2d3-4572-a5f1-2ed8420bdced" containerID="60cb8a349a797d86d87f49a10ba4cadb8c8b76d1fb8cb55dedd857416a9bbc3f" exitCode=0 Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.359837 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" event={"ID":"6a441cc0-e2d3-4572-a5f1-2ed8420bdced","Type":"ContainerDied","Data":"60cb8a349a797d86d87f49a10ba4cadb8c8b76d1fb8cb55dedd857416a9bbc3f"} Dec 01 
14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.361100 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rdmp7" event={"ID":"34c0705d-7431-485f-b393-d8d4ebd53098","Type":"ContainerStarted","Data":"96714400fac1892d4faa6ad94ee3817a0431d2cb33c8a357f3ebf9493e465fde"} Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.362919 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29410005-5pnzr" event={"ID":"251bd5c6-88f1-46eb-8a76-434c8e7a1e70","Type":"ContainerStarted","Data":"3a2ff7f0cd922548e868fdc91f7c3964802ca860ef52df09d07873d1a5cf1e2d"} Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.379022 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cx6wb" event={"ID":"750daab9-bb17-49c4-9db0-cae26692ded5","Type":"ContainerStarted","Data":"1499eac70b9dd616f76ed4bea370c07f95bafd888102d8a31c3c76d8a8f44e35"} Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.384997 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj7gt" podStartSLOduration=124.384981991 podStartE2EDuration="2m4.384981991s" podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:14.384507706 +0000 UTC m=+144.902216534" watchObservedRunningTime="2025-12-01 14:48:14.384981991 +0000 UTC m=+144.902690819" Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.401350 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4qtk" event={"ID":"0b788a05-de9f-4ef6-985d-d9569f4a9860","Type":"ContainerStarted","Data":"7c2495b84b4306eadf960739604a093217b2f9b35fc7c64f8f17d3acc32c3add"} Dec 01 14:48:14 
crc kubenswrapper[4637]: I1201 14:48:14.401518 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4qtk" Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.406549 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29410005-5pnzr" podStartSLOduration=125.406534449 podStartE2EDuration="2m5.406534449s" podCreationTimestamp="2025-12-01 14:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:14.404798026 +0000 UTC m=+144.922506854" watchObservedRunningTime="2025-12-01 14:48:14.406534449 +0000 UTC m=+144.924243277" Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.407840 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-wpnfr" event={"ID":"cc88f839-deb8-4d34-bfb6-c8da6a9087d6","Type":"ContainerStarted","Data":"e7c8b1fd85e00f2f32e7e4240b1360b6479e30d9c54be4d4712cf8c7a49e9a56"} Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.420461 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:14 crc kubenswrapper[4637]: E1201 14:48:14.421302 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:14.921269609 +0000 UTC m=+145.438978437 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.426498 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-57282" event={"ID":"03fa0315-7dd7-466c-a22b-3597d724a281","Type":"ContainerStarted","Data":"35664570f695855746963d63f392f35e273ed67a957c96d2d2b788cdfd4e6578"} Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.454754 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lc9z9" event={"ID":"f7c1aabd-a992-44af-8ace-e8f61ef4c55f","Type":"ContainerStarted","Data":"c72f60c9d0dd3f4ebb036bb32398690015331e9a6c7cc70ad5f5025da05592fd"} Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.459774 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j7ft6" event={"ID":"b473b5ee-cb81-4f88-b995-76895c4462a8","Type":"ContainerStarted","Data":"31b5d3066a0da4ccec22500dc1f14b7be76f090e7f17cd30a59fc3c359aebe23"} Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.472052 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-244ll" podStartSLOduration=124.47203708 podStartE2EDuration="2m4.47203708s" podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:14.47074437 +0000 UTC m=+144.988453198" watchObservedRunningTime="2025-12-01 
14:48:14.47203708 +0000 UTC m=+144.989745908" Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.508847 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ncnx" event={"ID":"63c18ab1-c963-4bed-a0b5-7d0873eebbac","Type":"ContainerStarted","Data":"e77eb7de62d8c33ee47d2188b871ad76d59322709c8f44ae88da5c6def2a4871"} Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.516415 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-57282" podStartSLOduration=124.516398435 podStartE2EDuration="2m4.516398435s" podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:14.514568098 +0000 UTC m=+145.032276926" watchObservedRunningTime="2025-12-01 14:48:14.516398435 +0000 UTC m=+145.034107263" Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.523385 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:14 crc kubenswrapper[4637]: E1201 14:48:14.526989 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:15.026971827 +0000 UTC m=+145.544680735 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.536227 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f4jrv" event={"ID":"60417f89-e1fc-48aa-ae2c-1a28adda9b65","Type":"ContainerStarted","Data":"5d1693836dc0dee66d7b16c69ee35955931529889db367d52f470c306eec947e"} Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.536539 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4qtk" podStartSLOduration=125.536524609 podStartE2EDuration="2m5.536524609s" podCreationTimestamp="2025-12-01 14:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:14.535836749 +0000 UTC m=+145.053545577" watchObservedRunningTime="2025-12-01 14:48:14.536524609 +0000 UTC m=+145.054233437" Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.551658 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhnkh" event={"ID":"23aa9f67-7343-49fc-884c-a48fef29649b","Type":"ContainerStarted","Data":"80bdc608392a22b25fdc5f84b3153d7f0db649a681e03b457120a03389536790"} Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.552221 4637 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-wvhc8 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness 
probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.552258 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wvhc8" podUID="78924479-be14-4e3f-88fb-0fe9f58adc11" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.613486 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-wpnfr" podStartSLOduration=124.613464289 podStartE2EDuration="2m4.613464289s" podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:14.577865372 +0000 UTC m=+145.095574210" watchObservedRunningTime="2025-12-01 14:48:14.613464289 +0000 UTC m=+145.131173117" Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.631383 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:14 crc kubenswrapper[4637]: E1201 14:48:14.632350 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:15.132301814 +0000 UTC m=+145.650010642 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.689379 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ncnx" podStartSLOduration=124.689354577 podStartE2EDuration="2m4.689354577s" podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:14.67504222 +0000 UTC m=+145.192751048" watchObservedRunningTime="2025-12-01 14:48:14.689354577 +0000 UTC m=+145.207063405" Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.750886 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:14 crc kubenswrapper[4637]: E1201 14:48:14.752536 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:15.252525557 +0000 UTC m=+145.770234385 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.769317 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhnkh" podStartSLOduration=124.769296699 podStartE2EDuration="2m4.769296699s" podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:14.767383331 +0000 UTC m=+145.285092159" watchObservedRunningTime="2025-12-01 14:48:14.769296699 +0000 UTC m=+145.287005527" Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.851693 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:14 crc kubenswrapper[4637]: E1201 14:48:14.852250 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:15.352236243 +0000 UTC m=+145.869945071 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:14 crc kubenswrapper[4637]: I1201 14:48:14.953154 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:14 crc kubenswrapper[4637]: E1201 14:48:14.953453 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:15.453441014 +0000 UTC m=+145.971149842 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.055155 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:15 crc kubenswrapper[4637]: E1201 14:48:15.055996 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:15.555902803 +0000 UTC m=+146.073611631 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.159644 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:15 crc kubenswrapper[4637]: E1201 14:48:15.160008 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:15.659995052 +0000 UTC m=+146.177703880 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.205260 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:15 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:15 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:15 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.205324 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.261014 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:15 crc kubenswrapper[4637]: E1201 14:48:15.261561 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-01 14:48:15.761546724 +0000 UTC m=+146.279255552 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.363100 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:15 crc kubenswrapper[4637]: E1201 14:48:15.363462 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:15.863450357 +0000 UTC m=+146.381159185 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.463844 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:15 crc kubenswrapper[4637]: E1201 14:48:15.464264 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:15.964248676 +0000 UTC m=+146.481957504 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.565283 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:15 crc kubenswrapper[4637]: E1201 14:48:15.565635 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:16.065624162 +0000 UTC m=+146.583332990 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.598963 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rdmp7" event={"ID":"34c0705d-7431-485f-b393-d8d4ebd53098","Type":"ContainerStarted","Data":"0bd77800ab34bcc84472fdf47bce5e263e6feaafd2d0077d201bad010c44c536"} Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.599029 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rdmp7" Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.604029 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cx6wb" event={"ID":"750daab9-bb17-49c4-9db0-cae26692ded5","Type":"ContainerStarted","Data":"339d0530f895575fdbf998b090f52fc33a46ca2dee8a529bd57c150910de5805"} Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.605822 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lc9z9" event={"ID":"f7c1aabd-a992-44af-8ace-e8f61ef4c55f","Type":"ContainerStarted","Data":"e192ed62af06279ad129764c492a75785d9c27e5b16281cd8dd39e0db0ffb70e"} Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.605865 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-lc9z9" Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.610837 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j7ft6" event={"ID":"b473b5ee-cb81-4f88-b995-76895c4462a8","Type":"ContainerStarted","Data":"72992041d81baa1622d633bf1db2ff9598af1721fe86dd11b7efc1698f811fa4"} Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.612226 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-5js8r" event={"ID":"242bd093-da03-4995-9b63-e5cbc6c40650","Type":"ContainerStarted","Data":"1178ec73957c9271624e90bea93440fd9541067eb0c580e7f44e41f5174c3b0a"} Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.613131 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.613229 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.614203 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-flvbk" event={"ID":"1216bb3b-7fde-42f8-b2f8-7b070dd63690","Type":"ContainerStarted","Data":"b8f0727a51cec6921bfcc1a9276c22c2491d4bd037907ce12da845245a6be0c9"} Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.616175 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" event={"ID":"6a441cc0-e2d3-4572-a5f1-2ed8420bdced","Type":"ContainerStarted","Data":"00e5b1036fc62cb7c36a8e9ca57e17b0fccfe9187a8dee896977b347d8864731"} Dec 01 14:48:15 crc kubenswrapper[4637]: 
I1201 14:48:15.618098 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f6mt7" event={"ID":"b33b7298-4521-49ef-a7bd-5a7c94e9aed7","Type":"ContainerStarted","Data":"0522fe57312be2d72ae497fa3dfd483ecd0a4690d2b6f252d24746bbf9ace169"} Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.618567 4637 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-hj7gt container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.618605 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj7gt" podUID="08cd040f-3760-4400-a689-2d3806f22d94" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.618676 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhnkh" Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.618965 4637 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-244ll container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.619005 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-244ll" podUID="6e99277b-aa2d-4f8d-a2f9-aeb954080a27" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection 
refused" Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.620237 4637 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-dhnkh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.620365 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhnkh" podUID="23aa9f67-7343-49fc-884c-a48fef29649b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.668645 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:15 crc kubenswrapper[4637]: E1201 14:48:15.668834 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:16.168808654 +0000 UTC m=+146.686517482 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.669161 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:15 crc kubenswrapper[4637]: E1201 14:48:15.669478 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:16.169467334 +0000 UTC m=+146.687176162 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.686004 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rdmp7" podStartSLOduration=125.685988918 podStartE2EDuration="2m5.685988918s" podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:15.684509343 +0000 UTC m=+146.202218171" watchObservedRunningTime="2025-12-01 14:48:15.685988918 +0000 UTC m=+146.203697746" Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.686514 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-f4jrv" podStartSLOduration=125.686511164 podStartE2EDuration="2m5.686511164s" podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:14.841521765 +0000 UTC m=+145.359230593" watchObservedRunningTime="2025-12-01 14:48:15.686511164 +0000 UTC m=+146.204219992" Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.718024 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cx6wb" podStartSLOduration=125.718009797 podStartE2EDuration="2m5.718009797s" 
podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:15.708891008 +0000 UTC m=+146.226599826" watchObservedRunningTime="2025-12-01 14:48:15.718009797 +0000 UTC m=+146.235718625" Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.770285 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:15 crc kubenswrapper[4637]: E1201 14:48:15.770443 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:16.270421367 +0000 UTC m=+146.788130205 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.770847 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:15 crc kubenswrapper[4637]: E1201 14:48:15.773899 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:16.273885853 +0000 UTC m=+146.791594681 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.802746 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-lc9z9" podStartSLOduration=10.802729574 podStartE2EDuration="10.802729574s" podCreationTimestamp="2025-12-01 14:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:15.769049055 +0000 UTC m=+146.286757883" watchObservedRunningTime="2025-12-01 14:48:15.802729574 +0000 UTC m=+146.320438402" Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.872575 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:15 crc kubenswrapper[4637]: E1201 14:48:15.872901 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:16.372887688 +0000 UTC m=+146.890596516 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:15 crc kubenswrapper[4637]: I1201 14:48:15.974051 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:15 crc kubenswrapper[4637]: E1201 14:48:15.974460 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:16.474444309 +0000 UTC m=+146.992153127 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.075274 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:16 crc kubenswrapper[4637]: E1201 14:48:16.075570 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:16.575541887 +0000 UTC m=+147.093250715 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.075943 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:16 crc kubenswrapper[4637]: E1201 14:48:16.076438 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:16.576426474 +0000 UTC m=+147.094135302 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.176603 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:16 crc kubenswrapper[4637]: E1201 14:48:16.177155 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:16.67713921 +0000 UTC m=+147.194848038 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.234065 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:16 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:16 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:16 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.234409 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.278842 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:16 crc kubenswrapper[4637]: E1201 14:48:16.279339 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-01 14:48:16.779322941 +0000 UTC m=+147.297031759 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.380653 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:16 crc kubenswrapper[4637]: E1201 14:48:16.381588 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:16.881571304 +0000 UTC m=+147.399280132 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.395650 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" podStartSLOduration=127.395637474 podStartE2EDuration="2m7.395637474s" podCreationTimestamp="2025-12-01 14:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:16.272987877 +0000 UTC m=+146.790696705" watchObservedRunningTime="2025-12-01 14:48:16.395637474 +0000 UTC m=+146.913346302" Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.483179 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:16 crc kubenswrapper[4637]: E1201 14:48:16.483454 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:16.983442436 +0000 UTC m=+147.501151264 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.493194 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-flvbk" podStartSLOduration=126.493178834 podStartE2EDuration="2m6.493178834s" podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:16.398032907 +0000 UTC m=+146.915741735" watchObservedRunningTime="2025-12-01 14:48:16.493178834 +0000 UTC m=+147.010887662" Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.494203 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-5js8r" podStartSLOduration=126.494197154 podStartE2EDuration="2m6.494197154s" podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:16.491858773 +0000 UTC m=+147.009567601" watchObservedRunningTime="2025-12-01 14:48:16.494197154 +0000 UTC m=+147.011905982" Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.558379 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j7ft6" podStartSLOduration=126.558364384 podStartE2EDuration="2m6.558364384s" podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:16.552259658 +0000 UTC m=+147.069968486" watchObservedRunningTime="2025-12-01 14:48:16.558364384 +0000 UTC m=+147.076073212" Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.584009 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.584183 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.584225 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.584253 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 
14:48:16.584316 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.585540 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:48:16 crc kubenswrapper[4637]: E1201 14:48:16.592191 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:17.092163436 +0000 UTC m=+147.609872264 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.604655 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.606522 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.613003 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.624699 4637 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-wvhc8 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.624759 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wvhc8" podUID="78924479-be14-4e3f-88fb-0fe9f58adc11" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.664630 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" event={"ID":"6a441cc0-e2d3-4572-a5f1-2ed8420bdced","Type":"ContainerStarted","Data":"e8fcad500a18ea0f4fab60915ab8bed12708a9cd8c4b2a881af3302b7cc9d8fa"} Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.691219 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.692150 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:16 crc kubenswrapper[4637]: E1201 14:48:16.692693 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:17.192679116 +0000 UTC m=+147.710387944 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.746056 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.746288 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.754278 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f6mt7" event={"ID":"b33b7298-4521-49ef-a7bd-5a7c94e9aed7","Type":"ContainerStarted","Data":"6e25d0b8acc9533a94dabcc588bd267e6fd71efdb7c2d2e8691fe8a5d9219c83"} Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.755339 4637 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-dhnkh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.755373 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhnkh" podUID="23aa9f67-7343-49fc-884c-a48fef29649b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Dec 01 14:48:16 crc kubenswrapper[4637]: 
I1201 14:48:16.755385 4637 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-244ll container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.755420 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-244ll" podUID="6e99277b-aa2d-4f8d-a2f9-aeb954080a27" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.794439 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:16 crc kubenswrapper[4637]: E1201 14:48:16.795209 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:17.295189937 +0000 UTC m=+147.812898765 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.895640 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:16 crc kubenswrapper[4637]: E1201 14:48:16.897927 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:17.397911605 +0000 UTC m=+147.915620433 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.984046 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-98z2t" Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.984103 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-98z2t" Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.998136 4637 patch_prober.go:28] interesting pod/console-f9d7485db-98z2t container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.998195 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-98z2t" podUID="6462925c-d528-4dd6-a6e1-55563db83168" containerName="console" probeResult="failure" output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.998465 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:16 crc kubenswrapper[4637]: E1201 14:48:16.998587 4637 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:17.498568799 +0000 UTC m=+148.016277627 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:16 crc kubenswrapper[4637]: I1201 14:48:16.998805 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:16 crc kubenswrapper[4637]: E1201 14:48:16.999110 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:17.499099106 +0000 UTC m=+148.016807934 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:17 crc kubenswrapper[4637]: I1201 14:48:17.103582 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:17 crc kubenswrapper[4637]: E1201 14:48:17.103783 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:17.603756893 +0000 UTC m=+148.121465721 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:17 crc kubenswrapper[4637]: I1201 14:48:17.103832 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:17 crc kubenswrapper[4637]: E1201 14:48:17.104168 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:17.604154915 +0000 UTC m=+148.121863743 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:17 crc kubenswrapper[4637]: I1201 14:48:17.208466 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:17 crc kubenswrapper[4637]: E1201 14:48:17.208801 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:17.708784621 +0000 UTC m=+148.226493449 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:17 crc kubenswrapper[4637]: I1201 14:48:17.219710 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:17 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:17 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:17 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:17 crc kubenswrapper[4637]: I1201 14:48:17.219774 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:17 crc kubenswrapper[4637]: I1201 14:48:17.325373 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:17 crc kubenswrapper[4637]: E1201 14:48:17.325654 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-01 14:48:17.82564343 +0000 UTC m=+148.343352258 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:17 crc kubenswrapper[4637]: I1201 14:48:17.427213 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:17 crc kubenswrapper[4637]: E1201 14:48:17.427345 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:17.927313815 +0000 UTC m=+148.445022633 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:17 crc kubenswrapper[4637]: I1201 14:48:17.427738 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:17 crc kubenswrapper[4637]: E1201 14:48:17.428023 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:17.928013966 +0000 UTC m=+148.445722794 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:17 crc kubenswrapper[4637]: I1201 14:48:17.528289 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:17 crc kubenswrapper[4637]: E1201 14:48:17.528589 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:18.028573188 +0000 UTC m=+148.546282016 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:17 crc kubenswrapper[4637]: I1201 14:48:17.677008 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:17 crc kubenswrapper[4637]: E1201 14:48:17.677343 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:18.177331181 +0000 UTC m=+148.695039999 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:17 crc kubenswrapper[4637]: I1201 14:48:17.767057 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f6mt7" event={"ID":"b33b7298-4521-49ef-a7bd-5a7c94e9aed7","Type":"ContainerStarted","Data":"db0f9db9e7f907c29e9f3a1c274b81c548fd9ffe88051bb768d1f4fee5902a99"} Dec 01 14:48:17 crc kubenswrapper[4637]: I1201 14:48:17.780960 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:17 crc kubenswrapper[4637]: E1201 14:48:17.781266 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:18.281249185 +0000 UTC m=+148.798958013 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:17 crc kubenswrapper[4637]: I1201 14:48:17.886666 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:17 crc kubenswrapper[4637]: E1201 14:48:17.887180 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:18.387168851 +0000 UTC m=+148.904877679 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:17 crc kubenswrapper[4637]: I1201 14:48:17.992542 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:17 crc kubenswrapper[4637]: E1201 14:48:17.992638 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:18.492617291 +0000 UTC m=+149.010326109 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:17 crc kubenswrapper[4637]: I1201 14:48:17.993095 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:17 crc kubenswrapper[4637]: E1201 14:48:17.993431 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:18.493419836 +0000 UTC m=+149.011128664 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:18 crc kubenswrapper[4637]: I1201 14:48:18.101460 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:18 crc kubenswrapper[4637]: E1201 14:48:18.101610 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:18.601586 +0000 UTC m=+149.119294828 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:18 crc kubenswrapper[4637]: I1201 14:48:18.101657 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:18 crc kubenswrapper[4637]: E1201 14:48:18.102005 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:18.601993013 +0000 UTC m=+149.119701841 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:18 crc kubenswrapper[4637]: I1201 14:48:18.189812 4637 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gpq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Dec 01 14:48:18 crc kubenswrapper[4637]: I1201 14:48:18.189863 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z2gpq" podUID="cbf56a46-431a-40ef-985f-8eb89ee80d70" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Dec 01 14:48:18 crc kubenswrapper[4637]: I1201 14:48:18.189873 4637 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gpq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Dec 01 14:48:18 crc kubenswrapper[4637]: I1201 14:48:18.189920 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-z2gpq" podUID="cbf56a46-431a-40ef-985f-8eb89ee80d70" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Dec 01 14:48:18 crc kubenswrapper[4637]: I1201 14:48:18.226090 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:18 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:18 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:18 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:18 crc kubenswrapper[4637]: I1201 14:48:18.226142 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:18 crc kubenswrapper[4637]: I1201 14:48:18.226342 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:18 crc kubenswrapper[4637]: E1201 14:48:18.226598 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:18.726583848 +0000 UTC m=+149.244292666 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:18 crc kubenswrapper[4637]: I1201 14:48:18.274041 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:18 crc kubenswrapper[4637]: I1201 14:48:18.274096 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:18 crc kubenswrapper[4637]: I1201 14:48:18.332269 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:18 crc kubenswrapper[4637]: E1201 14:48:18.332552 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:18.832541254 +0000 UTC m=+149.350250082 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:18 crc kubenswrapper[4637]: I1201 14:48:18.433673 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:18 crc kubenswrapper[4637]: E1201 14:48:18.433964 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:18.933948872 +0000 UTC m=+149.451657700 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:18 crc kubenswrapper[4637]: I1201 14:48:18.519000 4637 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-z4qtk container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 14:48:18 crc kubenswrapper[4637]: I1201 14:48:18.519041 4637 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-z4qtk container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 14:48:18 crc kubenswrapper[4637]: I1201 14:48:18.519065 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4qtk" podUID="0b788a05-de9f-4ef6-985d-d9569f4a9860" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 14:48:18 crc kubenswrapper[4637]: I1201 14:48:18.519109 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4qtk" podUID="0b788a05-de9f-4ef6-985d-d9569f4a9860" 
containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 14:48:18 crc kubenswrapper[4637]: I1201 14:48:18.537202 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:18 crc kubenswrapper[4637]: E1201 14:48:18.537551 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:19.037536706 +0000 UTC m=+149.555245534 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:18 crc kubenswrapper[4637]: I1201 14:48:18.637879 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:18 crc kubenswrapper[4637]: E1201 14:48:18.638061 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:19.138035815 +0000 UTC m=+149.655744643 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:18 crc kubenswrapper[4637]: I1201 14:48:18.638183 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:18 crc kubenswrapper[4637]: E1201 14:48:18.638488 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:19.138480419 +0000 UTC m=+149.656189247 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:18 crc kubenswrapper[4637]: I1201 14:48:18.661296 4637 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 01 14:48:18 crc kubenswrapper[4637]: I1201 14:48:18.739468 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:18 crc kubenswrapper[4637]: E1201 14:48:18.740138 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:19.240123243 +0000 UTC m=+149.757832071 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:18 crc kubenswrapper[4637]: I1201 14:48:18.840973 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:18 crc kubenswrapper[4637]: E1201 14:48:18.841290 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:19.341278384 +0000 UTC m=+149.858987212 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:18 crc kubenswrapper[4637]: I1201 14:48:18.841452 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ce83818ca0f57de74e52ac7714e94d3a213bfc19809a1deaf9d1d51619840a73"} Dec 01 14:48:18 crc kubenswrapper[4637]: I1201 14:48:18.844655 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f6mt7" event={"ID":"b33b7298-4521-49ef-a7bd-5a7c94e9aed7","Type":"ContainerStarted","Data":"f85655a302e7e4f49f401661f68e49b4da229d8bd1dbe963f28946c740d85251"} Dec 01 14:48:18 crc kubenswrapper[4637]: I1201 14:48:18.948195 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:18 crc kubenswrapper[4637]: E1201 14:48:18.949265 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:19.449249661 +0000 UTC m=+149.966958489 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:18 crc kubenswrapper[4637]: I1201 14:48:18.994076 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wvhc8" Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.041194 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj7gt" Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.049480 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:19 crc kubenswrapper[4637]: E1201 14:48:19.049981 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 14:48:19.549964927 +0000 UTC m=+150.067673755 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8b86l" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.145265 4637 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-01T14:48:18.661324527Z","Handler":null,"Name":""} Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.150512 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:19 crc kubenswrapper[4637]: E1201 14:48:19.151218 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 14:48:19.65120227 +0000 UTC m=+150.168911098 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.169107 4637 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.169142 4637 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.195426 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhnkh" Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.200622 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-nbf7h" Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.203053 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:19 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:19 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:19 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.203319 4637 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.229623 4637 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-244ll container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.229671 4637 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-244ll container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.229692 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-244ll" podUID="6e99277b-aa2d-4f8d-a2f9-aeb954080a27" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.229720 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-244ll" podUID="6e99277b-aa2d-4f8d-a2f9-aeb954080a27" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.252415 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:19 crc kubenswrapper[4637]: W1201 14:48:19.293231 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-f32c63ebaeae679c366d2893ee7ec0fa1a4ca385599ff36f295d756d7c8addc9 WatchSource:0}: Error finding container f32c63ebaeae679c366d2893ee7ec0fa1a4ca385599ff36f295d756d7c8addc9: Status 404 returned error can't find the container with id f32c63ebaeae679c366d2893ee7ec0fa1a4ca385599ff36f295d756d7c8addc9 Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.295823 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-f6mt7" podStartSLOduration=14.295808586 podStartE2EDuration="14.295808586s" podCreationTimestamp="2025-12-01 14:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:19.163623189 +0000 UTC m=+149.681332007" watchObservedRunningTime="2025-12-01 14:48:19.295808586 +0000 UTC m=+149.813517414" Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.466753 4637 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.466788 4637 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.599682 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2k4sx"] Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.600607 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2k4sx" Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.607284 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.635501 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2k4sx"] Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.761172 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxdm7\" (UniqueName: \"kubernetes.io/projected/3b76fb53-14a1-49f9-b120-a4b492ab70fc-kube-api-access-gxdm7\") pod \"certified-operators-2k4sx\" (UID: \"3b76fb53-14a1-49f9-b120-a4b492ab70fc\") " pod="openshift-marketplace/certified-operators-2k4sx" Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.761291 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3b76fb53-14a1-49f9-b120-a4b492ab70fc-catalog-content\") pod \"certified-operators-2k4sx\" (UID: \"3b76fb53-14a1-49f9-b120-a4b492ab70fc\") " pod="openshift-marketplace/certified-operators-2k4sx" Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.761364 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b76fb53-14a1-49f9-b120-a4b492ab70fc-utilities\") pod \"certified-operators-2k4sx\" (UID: \"3b76fb53-14a1-49f9-b120-a4b492ab70fc\") " pod="openshift-marketplace/certified-operators-2k4sx" Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.774362 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bfht5"] Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.775615 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bfht5" Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.790511 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.863293 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxdm7\" (UniqueName: \"kubernetes.io/projected/3b76fb53-14a1-49f9-b120-a4b492ab70fc-kube-api-access-gxdm7\") pod \"certified-operators-2k4sx\" (UID: \"3b76fb53-14a1-49f9-b120-a4b492ab70fc\") " pod="openshift-marketplace/certified-operators-2k4sx" Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.863704 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b76fb53-14a1-49f9-b120-a4b492ab70fc-catalog-content\") pod \"certified-operators-2k4sx\" (UID: \"3b76fb53-14a1-49f9-b120-a4b492ab70fc\") " pod="openshift-marketplace/certified-operators-2k4sx" Dec 01 
14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.863846 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b76fb53-14a1-49f9-b120-a4b492ab70fc-utilities\") pod \"certified-operators-2k4sx\" (UID: \"3b76fb53-14a1-49f9-b120-a4b492ab70fc\") " pod="openshift-marketplace/certified-operators-2k4sx" Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.864253 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b76fb53-14a1-49f9-b120-a4b492ab70fc-catalog-content\") pod \"certified-operators-2k4sx\" (UID: \"3b76fb53-14a1-49f9-b120-a4b492ab70fc\") " pod="openshift-marketplace/certified-operators-2k4sx" Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.864441 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b76fb53-14a1-49f9-b120-a4b492ab70fc-utilities\") pod \"certified-operators-2k4sx\" (UID: \"3b76fb53-14a1-49f9-b120-a4b492ab70fc\") " pod="openshift-marketplace/certified-operators-2k4sx" Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.865083 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f32c63ebaeae679c366d2893ee7ec0fa1a4ca385599ff36f295d756d7c8addc9"} Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.883447 4637 generic.go:334] "Generic (PLEG): container finished" podID="251bd5c6-88f1-46eb-8a76-434c8e7a1e70" containerID="3a2ff7f0cd922548e868fdc91f7c3964802ca860ef52df09d07873d1a5cf1e2d" exitCode=0 Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.883522 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29410005-5pnzr" 
event={"ID":"251bd5c6-88f1-46eb-8a76-434c8e7a1e70","Type":"ContainerDied","Data":"3a2ff7f0cd922548e868fdc91f7c3964802ca860ef52df09d07873d1a5cf1e2d"} Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.886094 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f2566e1a4fd2f3e1065da55bee6ad0de81511f2aa66ab13e15078f9d5fadd180"} Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.888862 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"923894b19fb457dea9e3123fe4459de5b57d0c2b15211ac18a1f1c72bc8770d7"} Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.928334 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8b86l\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.979188 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.979395 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90a8e718-fe2f-4e8f-acc6-bb25efde0385-catalog-content\") pod \"community-operators-bfht5\" (UID: 
\"90a8e718-fe2f-4e8f-acc6-bb25efde0385\") " pod="openshift-marketplace/community-operators-bfht5" Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.979428 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws2gn\" (UniqueName: \"kubernetes.io/projected/90a8e718-fe2f-4e8f-acc6-bb25efde0385-kube-api-access-ws2gn\") pod \"community-operators-bfht5\" (UID: \"90a8e718-fe2f-4e8f-acc6-bb25efde0385\") " pod="openshift-marketplace/community-operators-bfht5" Dec 01 14:48:19 crc kubenswrapper[4637]: I1201 14:48:19.979516 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90a8e718-fe2f-4e8f-acc6-bb25efde0385-utilities\") pod \"community-operators-bfht5\" (UID: \"90a8e718-fe2f-4e8f-acc6-bb25efde0385\") " pod="openshift-marketplace/community-operators-bfht5" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:19.990280 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxdm7\" (UniqueName: \"kubernetes.io/projected/3b76fb53-14a1-49f9-b120-a4b492ab70fc-kube-api-access-gxdm7\") pod \"certified-operators-2k4sx\" (UID: \"3b76fb53-14a1-49f9-b120-a4b492ab70fc\") " pod="openshift-marketplace/certified-operators-2k4sx" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.008978 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-79qxf"] Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.010873 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-79qxf" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.030620 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bfht5"] Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.054991 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-79qxf"] Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.087383 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90a8e718-fe2f-4e8f-acc6-bb25efde0385-catalog-content\") pod \"community-operators-bfht5\" (UID: \"90a8e718-fe2f-4e8f-acc6-bb25efde0385\") " pod="openshift-marketplace/community-operators-bfht5" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.087416 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws2gn\" (UniqueName: \"kubernetes.io/projected/90a8e718-fe2f-4e8f-acc6-bb25efde0385-kube-api-access-ws2gn\") pod \"community-operators-bfht5\" (UID: \"90a8e718-fe2f-4e8f-acc6-bb25efde0385\") " pod="openshift-marketplace/community-operators-bfht5" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.087439 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f427d1c1-5d40-4473-8295-4f271899dd13-utilities\") pod \"certified-operators-79qxf\" (UID: \"f427d1c1-5d40-4473-8295-4f271899dd13\") " pod="openshift-marketplace/certified-operators-79qxf" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.087469 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90a8e718-fe2f-4e8f-acc6-bb25efde0385-utilities\") pod \"community-operators-bfht5\" (UID: \"90a8e718-fe2f-4e8f-acc6-bb25efde0385\") " 
pod="openshift-marketplace/community-operators-bfht5" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.087491 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgkx2\" (UniqueName: \"kubernetes.io/projected/f427d1c1-5d40-4473-8295-4f271899dd13-kube-api-access-fgkx2\") pod \"certified-operators-79qxf\" (UID: \"f427d1c1-5d40-4473-8295-4f271899dd13\") " pod="openshift-marketplace/certified-operators-79qxf" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.087508 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f427d1c1-5d40-4473-8295-4f271899dd13-catalog-content\") pod \"certified-operators-79qxf\" (UID: \"f427d1c1-5d40-4473-8295-4f271899dd13\") " pod="openshift-marketplace/certified-operators-79qxf" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.088231 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90a8e718-fe2f-4e8f-acc6-bb25efde0385-catalog-content\") pod \"community-operators-bfht5\" (UID: \"90a8e718-fe2f-4e8f-acc6-bb25efde0385\") " pod="openshift-marketplace/community-operators-bfht5" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.088452 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90a8e718-fe2f-4e8f-acc6-bb25efde0385-utilities\") pod \"community-operators-bfht5\" (UID: \"90a8e718-fe2f-4e8f-acc6-bb25efde0385\") " pod="openshift-marketplace/community-operators-bfht5" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.184916 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws2gn\" (UniqueName: \"kubernetes.io/projected/90a8e718-fe2f-4e8f-acc6-bb25efde0385-kube-api-access-ws2gn\") pod \"community-operators-bfht5\" (UID: 
\"90a8e718-fe2f-4e8f-acc6-bb25efde0385\") " pod="openshift-marketplace/community-operators-bfht5" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.190031 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgkx2\" (UniqueName: \"kubernetes.io/projected/f427d1c1-5d40-4473-8295-4f271899dd13-kube-api-access-fgkx2\") pod \"certified-operators-79qxf\" (UID: \"f427d1c1-5d40-4473-8295-4f271899dd13\") " pod="openshift-marketplace/certified-operators-79qxf" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.190073 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f427d1c1-5d40-4473-8295-4f271899dd13-catalog-content\") pod \"certified-operators-79qxf\" (UID: \"f427d1c1-5d40-4473-8295-4f271899dd13\") " pod="openshift-marketplace/certified-operators-79qxf" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.190138 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f427d1c1-5d40-4473-8295-4f271899dd13-utilities\") pod \"certified-operators-79qxf\" (UID: \"f427d1c1-5d40-4473-8295-4f271899dd13\") " pod="openshift-marketplace/certified-operators-79qxf" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.190525 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f427d1c1-5d40-4473-8295-4f271899dd13-utilities\") pod \"certified-operators-79qxf\" (UID: \"f427d1c1-5d40-4473-8295-4f271899dd13\") " pod="openshift-marketplace/certified-operators-79qxf" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.190981 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f427d1c1-5d40-4473-8295-4f271899dd13-catalog-content\") pod \"certified-operators-79qxf\" (UID: \"f427d1c1-5d40-4473-8295-4f271899dd13\") 
" pod="openshift-marketplace/certified-operators-79qxf" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.203483 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:20 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:20 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:20 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.203528 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.231614 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.250491 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9m7pd"] Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.251323 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9m7pd" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.260644 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2k4sx" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.290891 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cghlr\" (UniqueName: \"kubernetes.io/projected/fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673-kube-api-access-cghlr\") pod \"community-operators-9m7pd\" (UID: \"fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673\") " pod="openshift-marketplace/community-operators-9m7pd" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.290974 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673-utilities\") pod \"community-operators-9m7pd\" (UID: \"fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673\") " pod="openshift-marketplace/community-operators-9m7pd" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.290992 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673-catalog-content\") pod \"community-operators-9m7pd\" (UID: \"fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673\") " pod="openshift-marketplace/community-operators-9m7pd" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.314338 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9m7pd"] Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.318551 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgkx2\" (UniqueName: \"kubernetes.io/projected/f427d1c1-5d40-4473-8295-4f271899dd13-kube-api-access-fgkx2\") pod \"certified-operators-79qxf\" (UID: \"f427d1c1-5d40-4473-8295-4f271899dd13\") " pod="openshift-marketplace/certified-operators-79qxf" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.322862 4637 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.344877 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-79qxf" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.391994 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cghlr\" (UniqueName: \"kubernetes.io/projected/fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673-kube-api-access-cghlr\") pod \"community-operators-9m7pd\" (UID: \"fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673\") " pod="openshift-marketplace/community-operators-9m7pd" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.392073 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673-utilities\") pod \"community-operators-9m7pd\" (UID: \"fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673\") " pod="openshift-marketplace/community-operators-9m7pd" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.392099 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673-catalog-content\") pod \"community-operators-9m7pd\" (UID: \"fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673\") " pod="openshift-marketplace/community-operators-9m7pd" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.392617 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673-utilities\") pod \"community-operators-9m7pd\" (UID: \"fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673\") " pod="openshift-marketplace/community-operators-9m7pd" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.392694 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673-catalog-content\") pod \"community-operators-9m7pd\" (UID: \"fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673\") " pod="openshift-marketplace/community-operators-9m7pd" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.431384 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bfht5" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.478256 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cghlr\" (UniqueName: \"kubernetes.io/projected/fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673-kube-api-access-cghlr\") pod \"community-operators-9m7pd\" (UID: \"fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673\") " pod="openshift-marketplace/community-operators-9m7pd" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.564256 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9m7pd" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.565010 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.565726 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.608218 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.608429 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.608699 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.608741 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.635035 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.723574 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.723650 4637 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.723759 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.857247 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4qtk" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.904451 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.940724 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 14:48:20 crc kubenswrapper[4637]: I1201 14:48:20.952899 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"66ae68e42017b6868da068d3a940ad7a8217f2325db753633241e9edd3dc3ec7"} Dec 01 14:48:21 crc kubenswrapper[4637]: I1201 14:48:21.250252 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:21 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:21 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:21 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:21 crc kubenswrapper[4637]: I1201 14:48:21.250609 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:21 crc kubenswrapper[4637]: I1201 14:48:21.750180 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8wk6j"] Dec 01 14:48:21 crc kubenswrapper[4637]: I1201 14:48:21.751667 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8wk6j" Dec 01 14:48:21 crc kubenswrapper[4637]: I1201 14:48:21.772220 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 01 14:48:21 crc kubenswrapper[4637]: I1201 14:48:21.857659 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 01 14:48:21 crc kubenswrapper[4637]: I1201 14:48:21.858288 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8wk6j"] Dec 01 14:48:21 crc kubenswrapper[4637]: I1201 14:48:21.899341 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85895600-b021-44c3-ac07-f6ccd4f40226-catalog-content\") pod \"redhat-marketplace-8wk6j\" (UID: \"85895600-b021-44c3-ac07-f6ccd4f40226\") " pod="openshift-marketplace/redhat-marketplace-8wk6j" Dec 01 14:48:21 crc kubenswrapper[4637]: I1201 14:48:21.899405 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8vg6\" (UniqueName: \"kubernetes.io/projected/85895600-b021-44c3-ac07-f6ccd4f40226-kube-api-access-z8vg6\") pod \"redhat-marketplace-8wk6j\" (UID: \"85895600-b021-44c3-ac07-f6ccd4f40226\") " pod="openshift-marketplace/redhat-marketplace-8wk6j" Dec 01 14:48:21 crc kubenswrapper[4637]: I1201 14:48:21.899470 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85895600-b021-44c3-ac07-f6ccd4f40226-utilities\") pod \"redhat-marketplace-8wk6j\" (UID: \"85895600-b021-44c3-ac07-f6ccd4f40226\") " pod="openshift-marketplace/redhat-marketplace-8wk6j" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:21.998307 
4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"33e7072d1c9135309a8de648dac0c1d3162306dd06b18ff462d27ff5f0fa435a"} Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:21.998340 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.001063 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85895600-b021-44c3-ac07-f6ccd4f40226-catalog-content\") pod \"redhat-marketplace-8wk6j\" (UID: \"85895600-b021-44c3-ac07-f6ccd4f40226\") " pod="openshift-marketplace/redhat-marketplace-8wk6j" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.001096 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8vg6\" (UniqueName: \"kubernetes.io/projected/85895600-b021-44c3-ac07-f6ccd4f40226-kube-api-access-z8vg6\") pod \"redhat-marketplace-8wk6j\" (UID: \"85895600-b021-44c3-ac07-f6ccd4f40226\") " pod="openshift-marketplace/redhat-marketplace-8wk6j" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.001129 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85895600-b021-44c3-ac07-f6ccd4f40226-utilities\") pod \"redhat-marketplace-8wk6j\" (UID: \"85895600-b021-44c3-ac07-f6ccd4f40226\") " pod="openshift-marketplace/redhat-marketplace-8wk6j" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.001580 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85895600-b021-44c3-ac07-f6ccd4f40226-utilities\") pod \"redhat-marketplace-8wk6j\" (UID: \"85895600-b021-44c3-ac07-f6ccd4f40226\") " 
pod="openshift-marketplace/redhat-marketplace-8wk6j" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.001828 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85895600-b021-44c3-ac07-f6ccd4f40226-catalog-content\") pod \"redhat-marketplace-8wk6j\" (UID: \"85895600-b021-44c3-ac07-f6ccd4f40226\") " pod="openshift-marketplace/redhat-marketplace-8wk6j" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.072743 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.073411 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.080346 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.080748 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.090754 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8vg6\" (UniqueName: \"kubernetes.io/projected/85895600-b021-44c3-ac07-f6ccd4f40226-kube-api-access-z8vg6\") pod \"redhat-marketplace-8wk6j\" (UID: \"85895600-b021-44c3-ac07-f6ccd4f40226\") " pod="openshift-marketplace/redhat-marketplace-8wk6j" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.094489 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.101862 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/9eda0139-3bd8-4d55-8f58-c9ee8e366637-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"9eda0139-3bd8-4d55-8f58-c9ee8e366637\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.101899 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9eda0139-3bd8-4d55-8f58-c9ee8e366637-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"9eda0139-3bd8-4d55-8f58-c9ee8e366637\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.125058 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9m7pd"] Dec 01 14:48:22 crc kubenswrapper[4637]: W1201 14:48:22.145335 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa7bf6cd_51f3_4b0f_8fcf_e1e4fa487673.slice/crio-89e81b40c856ccde9b73c27be3fcaa8e38dd0a9685027a02cf45a0583e8e4dd8 WatchSource:0}: Error finding container 89e81b40c856ccde9b73c27be3fcaa8e38dd0a9685027a02cf45a0583e8e4dd8: Status 404 returned error can't find the container with id 89e81b40c856ccde9b73c27be3fcaa8e38dd0a9685027a02cf45a0583e8e4dd8 Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.183089 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4kmbq"] Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.199818 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kmbq" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.202909 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9eda0139-3bd8-4d55-8f58-c9ee8e366637-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"9eda0139-3bd8-4d55-8f58-c9ee8e366637\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.203833 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9eda0139-3bd8-4d55-8f58-c9ee8e366637-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"9eda0139-3bd8-4d55-8f58-c9ee8e366637\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.203996 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9eda0139-3bd8-4d55-8f58-c9ee8e366637-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"9eda0139-3bd8-4d55-8f58-c9ee8e366637\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.213287 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kmbq"] Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.216721 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8wk6j" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.222266 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:22 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:22 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:22 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.222327 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.232147 4637 patch_prober.go:28] interesting pod/apiserver-76f77b778f-lmzmr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 01 14:48:22 crc kubenswrapper[4637]: [+]log ok Dec 01 14:48:22 crc kubenswrapper[4637]: [+]etcd ok Dec 01 14:48:22 crc kubenswrapper[4637]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 01 14:48:22 crc kubenswrapper[4637]: [+]poststarthook/generic-apiserver-start-informers ok Dec 01 14:48:22 crc kubenswrapper[4637]: [+]poststarthook/max-in-flight-filter ok Dec 01 14:48:22 crc kubenswrapper[4637]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 01 14:48:22 crc kubenswrapper[4637]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 01 14:48:22 crc kubenswrapper[4637]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 01 14:48:22 crc kubenswrapper[4637]: 
[-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 01 14:48:22 crc kubenswrapper[4637]: [+]poststarthook/project.openshift.io-projectcache ok Dec 01 14:48:22 crc kubenswrapper[4637]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 01 14:48:22 crc kubenswrapper[4637]: [+]poststarthook/openshift.io-startinformers ok Dec 01 14:48:22 crc kubenswrapper[4637]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 01 14:48:22 crc kubenswrapper[4637]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 01 14:48:22 crc kubenswrapper[4637]: livez check failed Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.232232 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" podUID="6a441cc0-e2d3-4572-a5f1-2ed8420bdced" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.323196 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9eda0139-3bd8-4d55-8f58-c9ee8e366637-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"9eda0139-3bd8-4d55-8f58-c9ee8e366637\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.418755 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf829738-1178-4c69-add1-22239dd6b4c9-utilities\") pod \"redhat-marketplace-4kmbq\" (UID: \"cf829738-1178-4c69-add1-22239dd6b4c9\") " pod="openshift-marketplace/redhat-marketplace-4kmbq" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.418845 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/cf829738-1178-4c69-add1-22239dd6b4c9-catalog-content\") pod \"redhat-marketplace-4kmbq\" (UID: \"cf829738-1178-4c69-add1-22239dd6b4c9\") " pod="openshift-marketplace/redhat-marketplace-4kmbq" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.418874 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tfkn\" (UniqueName: \"kubernetes.io/projected/cf829738-1178-4c69-add1-22239dd6b4c9-kube-api-access-8tfkn\") pod \"redhat-marketplace-4kmbq\" (UID: \"cf829738-1178-4c69-add1-22239dd6b4c9\") " pod="openshift-marketplace/redhat-marketplace-4kmbq" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.419084 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29410005-5pnzr" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.436192 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8b86l"] Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.453080 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.453504 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2k4sx"] Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.529263 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2492\" (UniqueName: \"kubernetes.io/projected/251bd5c6-88f1-46eb-8a76-434c8e7a1e70-kube-api-access-n2492\") pod \"251bd5c6-88f1-46eb-8a76-434c8e7a1e70\" (UID: \"251bd5c6-88f1-46eb-8a76-434c8e7a1e70\") " Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.529401 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/251bd5c6-88f1-46eb-8a76-434c8e7a1e70-secret-volume\") pod \"251bd5c6-88f1-46eb-8a76-434c8e7a1e70\" (UID: \"251bd5c6-88f1-46eb-8a76-434c8e7a1e70\") " Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.529431 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/251bd5c6-88f1-46eb-8a76-434c8e7a1e70-config-volume\") pod \"251bd5c6-88f1-46eb-8a76-434c8e7a1e70\" (UID: \"251bd5c6-88f1-46eb-8a76-434c8e7a1e70\") " Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.529585 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf829738-1178-4c69-add1-22239dd6b4c9-utilities\") pod \"redhat-marketplace-4kmbq\" (UID: \"cf829738-1178-4c69-add1-22239dd6b4c9\") " pod="openshift-marketplace/redhat-marketplace-4kmbq" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.529637 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf829738-1178-4c69-add1-22239dd6b4c9-catalog-content\") pod \"redhat-marketplace-4kmbq\" 
(UID: \"cf829738-1178-4c69-add1-22239dd6b4c9\") " pod="openshift-marketplace/redhat-marketplace-4kmbq" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.529656 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tfkn\" (UniqueName: \"kubernetes.io/projected/cf829738-1178-4c69-add1-22239dd6b4c9-kube-api-access-8tfkn\") pod \"redhat-marketplace-4kmbq\" (UID: \"cf829738-1178-4c69-add1-22239dd6b4c9\") " pod="openshift-marketplace/redhat-marketplace-4kmbq" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.530873 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/251bd5c6-88f1-46eb-8a76-434c8e7a1e70-config-volume" (OuterVolumeSpecName: "config-volume") pod "251bd5c6-88f1-46eb-8a76-434c8e7a1e70" (UID: "251bd5c6-88f1-46eb-8a76-434c8e7a1e70"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.534654 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf829738-1178-4c69-add1-22239dd6b4c9-utilities\") pod \"redhat-marketplace-4kmbq\" (UID: \"cf829738-1178-4c69-add1-22239dd6b4c9\") " pod="openshift-marketplace/redhat-marketplace-4kmbq" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.535096 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf829738-1178-4c69-add1-22239dd6b4c9-catalog-content\") pod \"redhat-marketplace-4kmbq\" (UID: \"cf829738-1178-4c69-add1-22239dd6b4c9\") " pod="openshift-marketplace/redhat-marketplace-4kmbq" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.536231 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/251bd5c6-88f1-46eb-8a76-434c8e7a1e70-kube-api-access-n2492" (OuterVolumeSpecName: "kube-api-access-n2492") pod 
"251bd5c6-88f1-46eb-8a76-434c8e7a1e70" (UID: "251bd5c6-88f1-46eb-8a76-434c8e7a1e70"). InnerVolumeSpecName "kube-api-access-n2492". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.541939 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/251bd5c6-88f1-46eb-8a76-434c8e7a1e70-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "251bd5c6-88f1-46eb-8a76-434c8e7a1e70" (UID: "251bd5c6-88f1-46eb-8a76-434c8e7a1e70"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.561721 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bfht5"] Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.574878 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tfkn\" (UniqueName: \"kubernetes.io/projected/cf829738-1178-4c69-add1-22239dd6b4c9-kube-api-access-8tfkn\") pod \"redhat-marketplace-4kmbq\" (UID: \"cf829738-1178-4c69-add1-22239dd6b4c9\") " pod="openshift-marketplace/redhat-marketplace-4kmbq" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.630853 4637 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/251bd5c6-88f1-46eb-8a76-434c8e7a1e70-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.630881 4637 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/251bd5c6-88f1-46eb-8a76-434c8e7a1e70-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.630891 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2492\" (UniqueName: \"kubernetes.io/projected/251bd5c6-88f1-46eb-8a76-434c8e7a1e70-kube-api-access-n2492\") on node \"crc\" DevicePath 
\"\"" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.688806 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.728732 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8wk6j"] Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.744136 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l2zct"] Dec 01 14:48:22 crc kubenswrapper[4637]: E1201 14:48:22.744324 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="251bd5c6-88f1-46eb-8a76-434c8e7a1e70" containerName="collect-profiles" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.744339 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="251bd5c6-88f1-46eb-8a76-434c8e7a1e70" containerName="collect-profiles" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.744452 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="251bd5c6-88f1-46eb-8a76-434c8e7a1e70" containerName="collect-profiles" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.745250 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l2zct" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.748640 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.754839 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l2zct"] Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.759786 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-79qxf"] Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.844828 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kmbq" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.934558 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8023e8dc-e1e9-48d5-b1de-005d6e38e174-catalog-content\") pod \"redhat-operators-l2zct\" (UID: \"8023e8dc-e1e9-48d5-b1de-005d6e38e174\") " pod="openshift-marketplace/redhat-operators-l2zct" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.934609 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg8bp\" (UniqueName: \"kubernetes.io/projected/8023e8dc-e1e9-48d5-b1de-005d6e38e174-kube-api-access-xg8bp\") pod \"redhat-operators-l2zct\" (UID: \"8023e8dc-e1e9-48d5-b1de-005d6e38e174\") " pod="openshift-marketplace/redhat-operators-l2zct" Dec 01 14:48:22 crc kubenswrapper[4637]: I1201 14:48:22.934635 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8023e8dc-e1e9-48d5-b1de-005d6e38e174-utilities\") pod \"redhat-operators-l2zct\" (UID: \"8023e8dc-e1e9-48d5-b1de-005d6e38e174\") " pod="openshift-marketplace/redhat-operators-l2zct" Dec 01 14:48:22 crc kubenswrapper[4637]: W1201 14:48:22.940594 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b76fb53_14a1_49f9_b120_a4b492ab70fc.slice/crio-3569cbc6675d10ace0780e4d5e38e0fd02331ec95674addf40c36e39709f91d3 WatchSource:0}: Error finding container 3569cbc6675d10ace0780e4d5e38e0fd02331ec95674addf40c36e39709f91d3: Status 404 returned error can't find the container with id 3569cbc6675d10ace0780e4d5e38e0fd02331ec95674addf40c36e39709f91d3 Dec 01 14:48:22 crc kubenswrapper[4637]: W1201 14:48:22.942077 4637 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90a8e718_fe2f_4e8f_acc6_bb25efde0385.slice/crio-0b65539f72dc08a21c6e8f068a0e7166d1e65664aedf96dc7ea245dcd4ba7b28 WatchSource:0}: Error finding container 0b65539f72dc08a21c6e8f068a0e7166d1e65664aedf96dc7ea245dcd4ba7b28: Status 404 returned error can't find the container with id 0b65539f72dc08a21c6e8f068a0e7166d1e65664aedf96dc7ea245dcd4ba7b28 Dec 01 14:48:22 crc kubenswrapper[4637]: W1201 14:48:22.944388 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8e26fa2c_46e9_4c37_bc8b_7c23ac3c128d.slice/crio-83512282c03beac306ca709cfd96ab4c1624273b1bac7a98965311e1d1ac31c3 WatchSource:0}: Error finding container 83512282c03beac306ca709cfd96ab4c1624273b1bac7a98965311e1d1ac31c3: Status 404 returned error can't find the container with id 83512282c03beac306ca709cfd96ab4c1624273b1bac7a98965311e1d1ac31c3 Dec 01 14:48:22 crc kubenswrapper[4637]: W1201 14:48:22.945682 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85895600_b021_44c3_ac07_f6ccd4f40226.slice/crio-2579a8e908f04d04ac286dc28e30c16d3a60fd0e8ce974c663db05503f1cddc3 WatchSource:0}: Error finding container 2579a8e908f04d04ac286dc28e30c16d3a60fd0e8ce974c663db05503f1cddc3: Status 404 returned error can't find the container with id 2579a8e908f04d04ac286dc28e30c16d3a60fd0e8ce974c663db05503f1cddc3 Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.022140 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfht5" event={"ID":"90a8e718-fe2f-4e8f-acc6-bb25efde0385","Type":"ContainerStarted","Data":"0b65539f72dc08a21c6e8f068a0e7166d1e65664aedf96dc7ea245dcd4ba7b28"} Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.039045 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" 
event={"ID":"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b","Type":"ContainerStarted","Data":"714729baab0eb8b0a9f9ac7e202d564a7697410dc9c3caadcb4e34cbbefdfc03"} Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.040081 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2k4sx" event={"ID":"3b76fb53-14a1-49f9-b120-a4b492ab70fc","Type":"ContainerStarted","Data":"3569cbc6675d10ace0780e4d5e38e0fd02331ec95674addf40c36e39709f91d3"} Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.044285 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8023e8dc-e1e9-48d5-b1de-005d6e38e174-catalog-content\") pod \"redhat-operators-l2zct\" (UID: \"8023e8dc-e1e9-48d5-b1de-005d6e38e174\") " pod="openshift-marketplace/redhat-operators-l2zct" Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.044340 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg8bp\" (UniqueName: \"kubernetes.io/projected/8023e8dc-e1e9-48d5-b1de-005d6e38e174-kube-api-access-xg8bp\") pod \"redhat-operators-l2zct\" (UID: \"8023e8dc-e1e9-48d5-b1de-005d6e38e174\") " pod="openshift-marketplace/redhat-operators-l2zct" Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.044359 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8023e8dc-e1e9-48d5-b1de-005d6e38e174-utilities\") pod \"redhat-operators-l2zct\" (UID: \"8023e8dc-e1e9-48d5-b1de-005d6e38e174\") " pod="openshift-marketplace/redhat-operators-l2zct" Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.045010 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8023e8dc-e1e9-48d5-b1de-005d6e38e174-utilities\") pod \"redhat-operators-l2zct\" (UID: \"8023e8dc-e1e9-48d5-b1de-005d6e38e174\") " 
pod="openshift-marketplace/redhat-operators-l2zct" Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.045263 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8023e8dc-e1e9-48d5-b1de-005d6e38e174-catalog-content\") pod \"redhat-operators-l2zct\" (UID: \"8023e8dc-e1e9-48d5-b1de-005d6e38e174\") " pod="openshift-marketplace/redhat-operators-l2zct" Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.062024 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-79qxf" event={"ID":"f427d1c1-5d40-4473-8295-4f271899dd13","Type":"ContainerStarted","Data":"c19abc36efb4b2fe06e5b5feba2adf4d06bc47a7a1075b45e344c24e5c41447d"} Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.063477 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8wk6j" event={"ID":"85895600-b021-44c3-ac07-f6ccd4f40226","Type":"ContainerStarted","Data":"2579a8e908f04d04ac286dc28e30c16d3a60fd0e8ce974c663db05503f1cddc3"} Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.081815 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg8bp\" (UniqueName: \"kubernetes.io/projected/8023e8dc-e1e9-48d5-b1de-005d6e38e174-kube-api-access-xg8bp\") pod \"redhat-operators-l2zct\" (UID: \"8023e8dc-e1e9-48d5-b1de-005d6e38e174\") " pod="openshift-marketplace/redhat-operators-l2zct" Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.126549 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29410005-5pnzr" Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.126633 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29410005-5pnzr" event={"ID":"251bd5c6-88f1-46eb-8a76-434c8e7a1e70","Type":"ContainerDied","Data":"4dfd0afe121f642bb7dbf3832be105849782775a8a07614309b99cede357ae78"} Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.126777 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dfd0afe121f642bb7dbf3832be105849782775a8a07614309b99cede357ae78" Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.146397 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d","Type":"ContainerStarted","Data":"83512282c03beac306ca709cfd96ab4c1624273b1bac7a98965311e1d1ac31c3"} Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.156612 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9m7pd" event={"ID":"fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673","Type":"ContainerStarted","Data":"a789c7eaa84d9bd95ca86e125ad2c86691874c03a84df4a8c0b3e58ce6996cda"} Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.156659 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9m7pd" event={"ID":"fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673","Type":"ContainerStarted","Data":"89e81b40c856ccde9b73c27be3fcaa8e38dd0a9685027a02cf45a0583e8e4dd8"} Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.162740 4637 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.168864 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6nhdr"] Dec 01 14:48:23 crc 
kubenswrapper[4637]: I1201 14:48:23.180904 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6nhdr" Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.206487 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:23 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:23 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:23 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.206534 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.211086 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6nhdr"] Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.291618 4637 patch_prober.go:28] interesting pod/apiserver-76f77b778f-lmzmr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 01 14:48:23 crc kubenswrapper[4637]: [+]log ok Dec 01 14:48:23 crc kubenswrapper[4637]: [+]etcd ok Dec 01 14:48:23 crc kubenswrapper[4637]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 01 14:48:23 crc kubenswrapper[4637]: [+]poststarthook/generic-apiserver-start-informers ok Dec 01 14:48:23 crc kubenswrapper[4637]: [+]poststarthook/max-in-flight-filter ok Dec 01 14:48:23 crc kubenswrapper[4637]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 01 14:48:23 crc kubenswrapper[4637]: 
[+]poststarthook/image.openshift.io-apiserver-caches ok Dec 01 14:48:23 crc kubenswrapper[4637]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 01 14:48:23 crc kubenswrapper[4637]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Dec 01 14:48:23 crc kubenswrapper[4637]: [+]poststarthook/project.openshift.io-projectcache ok Dec 01 14:48:23 crc kubenswrapper[4637]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 01 14:48:23 crc kubenswrapper[4637]: [+]poststarthook/openshift.io-startinformers ok Dec 01 14:48:23 crc kubenswrapper[4637]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 01 14:48:23 crc kubenswrapper[4637]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 01 14:48:23 crc kubenswrapper[4637]: livez check failed Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.291670 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" podUID="6a441cc0-e2d3-4572-a5f1-2ed8420bdced" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.355876 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9bf0fdb-f832-4c97-a1e4-74aace880d56-catalog-content\") pod \"redhat-operators-6nhdr\" (UID: \"f9bf0fdb-f832-4c97-a1e4-74aace880d56\") " pod="openshift-marketplace/redhat-operators-6nhdr" Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.357638 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9bf0fdb-f832-4c97-a1e4-74aace880d56-utilities\") pod \"redhat-operators-6nhdr\" (UID: \"f9bf0fdb-f832-4c97-a1e4-74aace880d56\") " pod="openshift-marketplace/redhat-operators-6nhdr" Dec 01 14:48:23 crc 
kubenswrapper[4637]: I1201 14:48:23.357682 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9cfx\" (UniqueName: \"kubernetes.io/projected/f9bf0fdb-f832-4c97-a1e4-74aace880d56-kube-api-access-l9cfx\") pod \"redhat-operators-6nhdr\" (UID: \"f9bf0fdb-f832-4c97-a1e4-74aace880d56\") " pod="openshift-marketplace/redhat-operators-6nhdr" Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.361676 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l2zct" Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.455879 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.461793 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9bf0fdb-f832-4c97-a1e4-74aace880d56-utilities\") pod \"redhat-operators-6nhdr\" (UID: \"f9bf0fdb-f832-4c97-a1e4-74aace880d56\") " pod="openshift-marketplace/redhat-operators-6nhdr" Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.461860 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9cfx\" (UniqueName: \"kubernetes.io/projected/f9bf0fdb-f832-4c97-a1e4-74aace880d56-kube-api-access-l9cfx\") pod \"redhat-operators-6nhdr\" (UID: \"f9bf0fdb-f832-4c97-a1e4-74aace880d56\") " pod="openshift-marketplace/redhat-operators-6nhdr" Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.461923 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9bf0fdb-f832-4c97-a1e4-74aace880d56-catalog-content\") pod \"redhat-operators-6nhdr\" (UID: \"f9bf0fdb-f832-4c97-a1e4-74aace880d56\") " pod="openshift-marketplace/redhat-operators-6nhdr" Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.462444 
4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9bf0fdb-f832-4c97-a1e4-74aace880d56-catalog-content\") pod \"redhat-operators-6nhdr\" (UID: \"f9bf0fdb-f832-4c97-a1e4-74aace880d56\") " pod="openshift-marketplace/redhat-operators-6nhdr" Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.463114 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9bf0fdb-f832-4c97-a1e4-74aace880d56-utilities\") pod \"redhat-operators-6nhdr\" (UID: \"f9bf0fdb-f832-4c97-a1e4-74aace880d56\") " pod="openshift-marketplace/redhat-operators-6nhdr" Dec 01 14:48:23 crc kubenswrapper[4637]: W1201 14:48:23.473121 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9eda0139_3bd8_4d55_8f58_c9ee8e366637.slice/crio-9af825349ef58c82ee22ff966b6e0802276a3b1148c2fcb8b8c065f4123c1d38 WatchSource:0}: Error finding container 9af825349ef58c82ee22ff966b6e0802276a3b1148c2fcb8b8c065f4123c1d38: Status 404 returned error can't find the container with id 9af825349ef58c82ee22ff966b6e0802276a3b1148c2fcb8b8c065f4123c1d38 Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.490114 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9cfx\" (UniqueName: \"kubernetes.io/projected/f9bf0fdb-f832-4c97-a1e4-74aace880d56-kube-api-access-l9cfx\") pod \"redhat-operators-6nhdr\" (UID: \"f9bf0fdb-f832-4c97-a1e4-74aace880d56\") " pod="openshift-marketplace/redhat-operators-6nhdr" Dec 01 14:48:23 crc kubenswrapper[4637]: I1201 14:48:23.656612 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6nhdr" Dec 01 14:48:24 crc kubenswrapper[4637]: I1201 14:48:24.088043 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kmbq"] Dec 01 14:48:24 crc kubenswrapper[4637]: I1201 14:48:24.098304 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-lc9z9" Dec 01 14:48:24 crc kubenswrapper[4637]: I1201 14:48:24.143778 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l2zct"] Dec 01 14:48:24 crc kubenswrapper[4637]: I1201 14:48:24.200271 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2zct" event={"ID":"8023e8dc-e1e9-48d5-b1de-005d6e38e174","Type":"ContainerStarted","Data":"6d3865fbc3abd53b8dfe6067b9a5a7ff3324ca6f0c821bd9ec5fde29e070c8c1"} Dec 01 14:48:24 crc kubenswrapper[4637]: I1201 14:48:24.209925 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:24 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:24 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:24 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:24 crc kubenswrapper[4637]: I1201 14:48:24.209995 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:24 crc kubenswrapper[4637]: I1201 14:48:24.225419 4637 generic.go:334] "Generic (PLEG): container finished" podID="fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673" containerID="a789c7eaa84d9bd95ca86e125ad2c86691874c03a84df4a8c0b3e58ce6996cda" exitCode=0 
Dec 01 14:48:24 crc kubenswrapper[4637]: I1201 14:48:24.225582 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9m7pd" event={"ID":"fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673","Type":"ContainerDied","Data":"a789c7eaa84d9bd95ca86e125ad2c86691874c03a84df4a8c0b3e58ce6996cda"} Dec 01 14:48:24 crc kubenswrapper[4637]: I1201 14:48:24.243170 4637 generic.go:334] "Generic (PLEG): container finished" podID="3b76fb53-14a1-49f9-b120-a4b492ab70fc" containerID="a2f8426c415f4f11fc2bc5b60d36c453842a51acd8a5aaf70b780aee36946c4c" exitCode=0 Dec 01 14:48:24 crc kubenswrapper[4637]: I1201 14:48:24.243434 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2k4sx" event={"ID":"3b76fb53-14a1-49f9-b120-a4b492ab70fc","Type":"ContainerDied","Data":"a2f8426c415f4f11fc2bc5b60d36c453842a51acd8a5aaf70b780aee36946c4c"} Dec 01 14:48:24 crc kubenswrapper[4637]: I1201 14:48:24.265265 4637 generic.go:334] "Generic (PLEG): container finished" podID="f427d1c1-5d40-4473-8295-4f271899dd13" containerID="3adb48ca4e0b31e653a73ea4f0e13da56eeceeabd7ff4c3af11d6db3c9e5dcf4" exitCode=0 Dec 01 14:48:24 crc kubenswrapper[4637]: I1201 14:48:24.265431 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-79qxf" event={"ID":"f427d1c1-5d40-4473-8295-4f271899dd13","Type":"ContainerDied","Data":"3adb48ca4e0b31e653a73ea4f0e13da56eeceeabd7ff4c3af11d6db3c9e5dcf4"} Dec 01 14:48:24 crc kubenswrapper[4637]: I1201 14:48:24.289099 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kmbq" event={"ID":"cf829738-1178-4c69-add1-22239dd6b4c9","Type":"ContainerStarted","Data":"ace25cb229f206c69fae103b01f4bbf9ce1d22a3165706f8c8c888949d433f13"} Dec 01 14:48:24 crc kubenswrapper[4637]: I1201 14:48:24.307046 4637 generic.go:334] "Generic (PLEG): container finished" podID="85895600-b021-44c3-ac07-f6ccd4f40226" 
containerID="ae6e0bbd2d62324bdf90826b89f1899e04f100383130ec2bb8c8d144a60beaa4" exitCode=0 Dec 01 14:48:24 crc kubenswrapper[4637]: I1201 14:48:24.307150 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8wk6j" event={"ID":"85895600-b021-44c3-ac07-f6ccd4f40226","Type":"ContainerDied","Data":"ae6e0bbd2d62324bdf90826b89f1899e04f100383130ec2bb8c8d144a60beaa4"} Dec 01 14:48:24 crc kubenswrapper[4637]: I1201 14:48:24.350117 4637 generic.go:334] "Generic (PLEG): container finished" podID="90a8e718-fe2f-4e8f-acc6-bb25efde0385" containerID="87b20f5bbfc41ee5f8783f4b28c9aa0721e7cc277431d694c2a9eb139697ac51" exitCode=0 Dec 01 14:48:24 crc kubenswrapper[4637]: I1201 14:48:24.350168 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfht5" event={"ID":"90a8e718-fe2f-4e8f-acc6-bb25efde0385","Type":"ContainerDied","Data":"87b20f5bbfc41ee5f8783f4b28c9aa0721e7cc277431d694c2a9eb139697ac51"} Dec 01 14:48:24 crc kubenswrapper[4637]: I1201 14:48:24.365959 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" event={"ID":"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b","Type":"ContainerStarted","Data":"32a82240aab19455b47b27b284858d8e4f8e8a85c7fcc445c5474de613799d1b"} Dec 01 14:48:24 crc kubenswrapper[4637]: I1201 14:48:24.366702 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:24 crc kubenswrapper[4637]: I1201 14:48:24.391921 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"9eda0139-3bd8-4d55-8f58-c9ee8e366637","Type":"ContainerStarted","Data":"9af825349ef58c82ee22ff966b6e0802276a3b1148c2fcb8b8c065f4123c1d38"} Dec 01 14:48:24 crc kubenswrapper[4637]: I1201 14:48:24.439516 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=4.439496475 podStartE2EDuration="4.439496475s" podCreationTimestamp="2025-12-01 14:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:24.408074005 +0000 UTC m=+154.925782833" watchObservedRunningTime="2025-12-01 14:48:24.439496475 +0000 UTC m=+154.957205303" Dec 01 14:48:24 crc kubenswrapper[4637]: I1201 14:48:24.466877 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" podStartSLOduration=134.46685901 podStartE2EDuration="2m14.46685901s" podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:24.439032561 +0000 UTC m=+154.956741389" watchObservedRunningTime="2025-12-01 14:48:24.46685901 +0000 UTC m=+154.984567838" Dec 01 14:48:24 crc kubenswrapper[4637]: I1201 14:48:24.686538 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.68651425 podStartE2EDuration="2.68651425s" podCreationTimestamp="2025-12-01 14:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:24.4727193 +0000 UTC m=+154.990428128" watchObservedRunningTime="2025-12-01 14:48:24.68651425 +0000 UTC m=+155.204223078" Dec 01 14:48:24 crc kubenswrapper[4637]: I1201 14:48:24.689541 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6nhdr"] Dec 01 14:48:24 crc kubenswrapper[4637]: W1201 14:48:24.756544 4637 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9bf0fdb_f832_4c97_a1e4_74aace880d56.slice/crio-438a54379878c397d41234bf6ba6c50fbc365bd0014d935665c200f992fe5439 WatchSource:0}: Error finding container 438a54379878c397d41234bf6ba6c50fbc365bd0014d935665c200f992fe5439: Status 404 returned error can't find the container with id 438a54379878c397d41234bf6ba6c50fbc365bd0014d935665c200f992fe5439 Dec 01 14:48:25 crc kubenswrapper[4637]: I1201 14:48:25.207988 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:25 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:25 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:25 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:25 crc kubenswrapper[4637]: I1201 14:48:25.208263 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:25 crc kubenswrapper[4637]: I1201 14:48:25.435790 4637 generic.go:334] "Generic (PLEG): container finished" podID="8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d" containerID="d11a93815c4ccca1906f475597ea2763bb8e8909b15ecb8a816942d7f8914740" exitCode=0 Dec 01 14:48:25 crc kubenswrapper[4637]: I1201 14:48:25.435857 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d","Type":"ContainerDied","Data":"d11a93815c4ccca1906f475597ea2763bb8e8909b15ecb8a816942d7f8914740"} Dec 01 14:48:25 crc kubenswrapper[4637]: I1201 14:48:25.446259 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"9eda0139-3bd8-4d55-8f58-c9ee8e366637","Type":"ContainerStarted","Data":"ab64b308a7547be654a027c55aa01ee3a4c9a8a11e828dec0f4e8c5b6de88b6b"} Dec 01 14:48:25 crc kubenswrapper[4637]: I1201 14:48:25.466182 4637 generic.go:334] "Generic (PLEG): container finished" podID="cf829738-1178-4c69-add1-22239dd6b4c9" containerID="97096ba14b798a947e68bf759fe9092e20b0665e022e922daa58204a42ac263b" exitCode=0 Dec 01 14:48:25 crc kubenswrapper[4637]: I1201 14:48:25.466342 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kmbq" event={"ID":"cf829738-1178-4c69-add1-22239dd6b4c9","Type":"ContainerDied","Data":"97096ba14b798a947e68bf759fe9092e20b0665e022e922daa58204a42ac263b"} Dec 01 14:48:25 crc kubenswrapper[4637]: I1201 14:48:25.491309 4637 generic.go:334] "Generic (PLEG): container finished" podID="8023e8dc-e1e9-48d5-b1de-005d6e38e174" containerID="bd8c13bd93caf013b68bdf414589025d3e8487f43e6e998aea6648c0340a7e42" exitCode=0 Dec 01 14:48:25 crc kubenswrapper[4637]: I1201 14:48:25.491412 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2zct" event={"ID":"8023e8dc-e1e9-48d5-b1de-005d6e38e174","Type":"ContainerDied","Data":"bd8c13bd93caf013b68bdf414589025d3e8487f43e6e998aea6648c0340a7e42"} Dec 01 14:48:25 crc kubenswrapper[4637]: I1201 14:48:25.498256 4637 generic.go:334] "Generic (PLEG): container finished" podID="f9bf0fdb-f832-4c97-a1e4-74aace880d56" containerID="ae5259ad1e87c79d7bec94a5d49a2dc1803410958965516de7f1feb40859a762" exitCode=0 Dec 01 14:48:25 crc kubenswrapper[4637]: I1201 14:48:25.499710 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nhdr" event={"ID":"f9bf0fdb-f832-4c97-a1e4-74aace880d56","Type":"ContainerDied","Data":"ae5259ad1e87c79d7bec94a5d49a2dc1803410958965516de7f1feb40859a762"} Dec 01 14:48:25 crc kubenswrapper[4637]: I1201 14:48:25.499755 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-6nhdr" event={"ID":"f9bf0fdb-f832-4c97-a1e4-74aace880d56","Type":"ContainerStarted","Data":"438a54379878c397d41234bf6ba6c50fbc365bd0014d935665c200f992fe5439"} Dec 01 14:48:26 crc kubenswrapper[4637]: I1201 14:48:26.204649 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:26 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:26 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:26 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:26 crc kubenswrapper[4637]: I1201 14:48:26.204704 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:26 crc kubenswrapper[4637]: I1201 14:48:26.655754 4637 generic.go:334] "Generic (PLEG): container finished" podID="9eda0139-3bd8-4d55-8f58-c9ee8e366637" containerID="ab64b308a7547be654a027c55aa01ee3a4c9a8a11e828dec0f4e8c5b6de88b6b" exitCode=0 Dec 01 14:48:26 crc kubenswrapper[4637]: I1201 14:48:26.655906 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"9eda0139-3bd8-4d55-8f58-c9ee8e366637","Type":"ContainerDied","Data":"ab64b308a7547be654a027c55aa01ee3a4c9a8a11e828dec0f4e8c5b6de88b6b"} Dec 01 14:48:26 crc kubenswrapper[4637]: I1201 14:48:26.987091 4637 patch_prober.go:28] interesting pod/console-f9d7485db-98z2t container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Dec 01 14:48:26 crc kubenswrapper[4637]: I1201 14:48:26.987541 
4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-98z2t" podUID="6462925c-d528-4dd6-a6e1-55563db83168" containerName="console" probeResult="failure" output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" Dec 01 14:48:27 crc kubenswrapper[4637]: I1201 14:48:27.258230 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:27 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:27 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:27 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:27 crc kubenswrapper[4637]: I1201 14:48:27.258295 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:28 crc kubenswrapper[4637]: I1201 14:48:28.097677 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 14:48:28 crc kubenswrapper[4637]: I1201 14:48:28.167119 4637 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gpq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Dec 01 14:48:28 crc kubenswrapper[4637]: I1201 14:48:28.167172 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-z2gpq" podUID="cbf56a46-431a-40ef-985f-8eb89ee80d70" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Dec 01 14:48:28 crc kubenswrapper[4637]: I1201 14:48:28.175813 4637 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gpq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Dec 01 14:48:28 crc kubenswrapper[4637]: I1201 14:48:28.175858 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z2gpq" podUID="cbf56a46-431a-40ef-985f-8eb89ee80d70" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Dec 01 14:48:28 crc kubenswrapper[4637]: I1201 14:48:28.195298 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d-kubelet-dir\") pod \"8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d\" (UID: \"8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d\") " Dec 01 14:48:28 crc kubenswrapper[4637]: I1201 14:48:28.195365 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d-kube-api-access\") pod \"8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d\" (UID: \"8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d\") " Dec 01 14:48:28 crc kubenswrapper[4637]: I1201 14:48:28.199397 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d" (UID: "8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:48:28 crc kubenswrapper[4637]: I1201 14:48:28.202693 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:28 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:28 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:28 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:28 crc kubenswrapper[4637]: I1201 14:48:28.202741 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:28 crc kubenswrapper[4637]: I1201 14:48:28.234499 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d" (UID: "8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:48:28 crc kubenswrapper[4637]: I1201 14:48:28.276044 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:28 crc kubenswrapper[4637]: I1201 14:48:28.286749 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-lmzmr" Dec 01 14:48:28 crc kubenswrapper[4637]: I1201 14:48:28.297702 4637 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 01 14:48:28 crc kubenswrapper[4637]: I1201 14:48:28.297726 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 14:48:28 crc kubenswrapper[4637]: I1201 14:48:28.794315 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 14:48:28 crc kubenswrapper[4637]: I1201 14:48:28.794307 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d","Type":"ContainerDied","Data":"83512282c03beac306ca709cfd96ab4c1624273b1bac7a98965311e1d1ac31c3"} Dec 01 14:48:28 crc kubenswrapper[4637]: I1201 14:48:28.794685 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83512282c03beac306ca709cfd96ab4c1624273b1bac7a98965311e1d1ac31c3" Dec 01 14:48:28 crc kubenswrapper[4637]: I1201 14:48:28.794971 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 14:48:28 crc kubenswrapper[4637]: I1201 14:48:28.909919 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9eda0139-3bd8-4d55-8f58-c9ee8e366637-kubelet-dir\") pod \"9eda0139-3bd8-4d55-8f58-c9ee8e366637\" (UID: \"9eda0139-3bd8-4d55-8f58-c9ee8e366637\") " Dec 01 14:48:28 crc kubenswrapper[4637]: I1201 14:48:28.909982 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9eda0139-3bd8-4d55-8f58-c9ee8e366637-kube-api-access\") pod \"9eda0139-3bd8-4d55-8f58-c9ee8e366637\" (UID: \"9eda0139-3bd8-4d55-8f58-c9ee8e366637\") " Dec 01 14:48:28 crc kubenswrapper[4637]: I1201 14:48:28.910049 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9eda0139-3bd8-4d55-8f58-c9ee8e366637-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9eda0139-3bd8-4d55-8f58-c9ee8e366637" (UID: "9eda0139-3bd8-4d55-8f58-c9ee8e366637"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:48:28 crc kubenswrapper[4637]: I1201 14:48:28.910542 4637 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9eda0139-3bd8-4d55-8f58-c9ee8e366637-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 01 14:48:28 crc kubenswrapper[4637]: I1201 14:48:28.943416 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eda0139-3bd8-4d55-8f58-c9ee8e366637-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9eda0139-3bd8-4d55-8f58-c9ee8e366637" (UID: "9eda0139-3bd8-4d55-8f58-c9ee8e366637"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:48:29 crc kubenswrapper[4637]: I1201 14:48:29.012679 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9eda0139-3bd8-4d55-8f58-c9ee8e366637-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 14:48:29 crc kubenswrapper[4637]: I1201 14:48:29.234939 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:29 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:29 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:29 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:29 crc kubenswrapper[4637]: I1201 14:48:29.234995 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:29 crc kubenswrapper[4637]: I1201 14:48:29.268396 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-244ll" Dec 01 14:48:29 crc kubenswrapper[4637]: I1201 14:48:29.843374 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:48:29 crc kubenswrapper[4637]: I1201 14:48:29.878044 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"9eda0139-3bd8-4d55-8f58-c9ee8e366637","Type":"ContainerDied","Data":"9af825349ef58c82ee22ff966b6e0802276a3b1148c2fcb8b8c065f4123c1d38"} Dec 01 14:48:29 crc kubenswrapper[4637]: I1201 14:48:29.878077 4637 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="9af825349ef58c82ee22ff966b6e0802276a3b1148c2fcb8b8c065f4123c1d38" Dec 01 14:48:29 crc kubenswrapper[4637]: I1201 14:48:29.878152 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 14:48:30 crc kubenswrapper[4637]: I1201 14:48:30.206389 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:30 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:30 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:30 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:30 crc kubenswrapper[4637]: I1201 14:48:30.206708 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:31 crc kubenswrapper[4637]: I1201 14:48:31.201491 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:31 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:31 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:31 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:31 crc kubenswrapper[4637]: I1201 14:48:31.201578 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:31 crc kubenswrapper[4637]: I1201 
14:48:31.955104 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/435e8f74-9c96-4508-b6a6-a1a2280f8176-metrics-certs\") pod \"network-metrics-daemon-7w2l8\" (UID: \"435e8f74-9c96-4508-b6a6-a1a2280f8176\") " pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:48:31 crc kubenswrapper[4637]: I1201 14:48:31.980089 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/435e8f74-9c96-4508-b6a6-a1a2280f8176-metrics-certs\") pod \"network-metrics-daemon-7w2l8\" (UID: \"435e8f74-9c96-4508-b6a6-a1a2280f8176\") " pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:48:32 crc kubenswrapper[4637]: I1201 14:48:32.016946 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7w2l8" Dec 01 14:48:32 crc kubenswrapper[4637]: I1201 14:48:32.202523 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:32 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:32 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:32 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:32 crc kubenswrapper[4637]: I1201 14:48:32.202570 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:32 crc kubenswrapper[4637]: I1201 14:48:32.972968 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7w2l8"] Dec 01 14:48:33 crc kubenswrapper[4637]: W1201 14:48:33.007497 4637 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod435e8f74_9c96_4508_b6a6_a1a2280f8176.slice/crio-9f26406db0f328de6396ecfe2ba0ee4135f4f68f02a4ced4577ce675eb7cd694 WatchSource:0}: Error finding container 9f26406db0f328de6396ecfe2ba0ee4135f4f68f02a4ced4577ce675eb7cd694: Status 404 returned error can't find the container with id 9f26406db0f328de6396ecfe2ba0ee4135f4f68f02a4ced4577ce675eb7cd694 Dec 01 14:48:33 crc kubenswrapper[4637]: I1201 14:48:33.206811 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:33 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:33 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:33 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:33 crc kubenswrapper[4637]: I1201 14:48:33.207676 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:34 crc kubenswrapper[4637]: I1201 14:48:34.007962 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7w2l8" event={"ID":"435e8f74-9c96-4508-b6a6-a1a2280f8176","Type":"ContainerStarted","Data":"9f26406db0f328de6396ecfe2ba0ee4135f4f68f02a4ced4577ce675eb7cd694"} Dec 01 14:48:34 crc kubenswrapper[4637]: I1201 14:48:34.201828 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:34 crc kubenswrapper[4637]: [-]has-synced failed: 
reason withheld Dec 01 14:48:34 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:34 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:34 crc kubenswrapper[4637]: I1201 14:48:34.201878 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:35 crc kubenswrapper[4637]: I1201 14:48:35.021310 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7w2l8" event={"ID":"435e8f74-9c96-4508-b6a6-a1a2280f8176","Type":"ContainerStarted","Data":"709cc420ebc0cde0766255445a0556d5184ee192614fae85083133aee63c4439"} Dec 01 14:48:35 crc kubenswrapper[4637]: I1201 14:48:35.207571 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:35 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:35 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:35 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:35 crc kubenswrapper[4637]: I1201 14:48:35.207665 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:36 crc kubenswrapper[4637]: I1201 14:48:36.072246 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7w2l8" event={"ID":"435e8f74-9c96-4508-b6a6-a1a2280f8176","Type":"ContainerStarted","Data":"894d22ab533ce6efbef38b3174b46850ea3a73560dc0253556fe2e8f98a420b3"} Dec 01 14:48:36 crc kubenswrapper[4637]: I1201 14:48:36.101089 4637 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-7w2l8" podStartSLOduration=146.101068025 podStartE2EDuration="2m26.101068025s" podCreationTimestamp="2025-12-01 14:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:48:36.094215796 +0000 UTC m=+166.611924624" watchObservedRunningTime="2025-12-01 14:48:36.101068025 +0000 UTC m=+166.618776853" Dec 01 14:48:36 crc kubenswrapper[4637]: I1201 14:48:36.202515 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:36 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:36 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:36 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:36 crc kubenswrapper[4637]: I1201 14:48:36.202622 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:36 crc kubenswrapper[4637]: I1201 14:48:36.986149 4637 patch_prober.go:28] interesting pod/console-f9d7485db-98z2t container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Dec 01 14:48:36 crc kubenswrapper[4637]: I1201 14:48:36.986200 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-98z2t" podUID="6462925c-d528-4dd6-a6e1-55563db83168" containerName="console" probeResult="failure" output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: 
connect: connection refused" Dec 01 14:48:37 crc kubenswrapper[4637]: I1201 14:48:37.206297 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:37 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:37 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:37 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:37 crc kubenswrapper[4637]: I1201 14:48:37.206347 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:38 crc kubenswrapper[4637]: I1201 14:48:38.178296 4637 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gpq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Dec 01 14:48:38 crc kubenswrapper[4637]: I1201 14:48:38.178606 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-z2gpq" podUID="cbf56a46-431a-40ef-985f-8eb89ee80d70" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Dec 01 14:48:38 crc kubenswrapper[4637]: I1201 14:48:38.178650 4637 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-z2gpq" Dec 01 14:48:38 crc kubenswrapper[4637]: I1201 14:48:38.178315 4637 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gpq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Dec 01 14:48:38 crc kubenswrapper[4637]: I1201 14:48:38.178961 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z2gpq" podUID="cbf56a46-431a-40ef-985f-8eb89ee80d70" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Dec 01 14:48:38 crc kubenswrapper[4637]: I1201 14:48:38.179288 4637 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gpq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Dec 01 14:48:38 crc kubenswrapper[4637]: I1201 14:48:38.179306 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z2gpq" podUID="cbf56a46-431a-40ef-985f-8eb89ee80d70" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Dec 01 14:48:38 crc kubenswrapper[4637]: I1201 14:48:38.179324 4637 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"ff145903317a06f18d0daa26f9c30f58940c73c3a60af79c211f9f55c9209ffc"} pod="openshift-console/downloads-7954f5f757-z2gpq" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 01 14:48:38 crc kubenswrapper[4637]: I1201 14:48:38.179436 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-z2gpq" podUID="cbf56a46-431a-40ef-985f-8eb89ee80d70" containerName="download-server" containerID="cri-o://ff145903317a06f18d0daa26f9c30f58940c73c3a60af79c211f9f55c9209ffc" gracePeriod=2 Dec 01 14:48:38 crc kubenswrapper[4637]: I1201 
14:48:38.202198 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:38 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:38 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:38 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:38 crc kubenswrapper[4637]: I1201 14:48:38.202257 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:39 crc kubenswrapper[4637]: I1201 14:48:39.140144 4637 generic.go:334] "Generic (PLEG): container finished" podID="cbf56a46-431a-40ef-985f-8eb89ee80d70" containerID="ff145903317a06f18d0daa26f9c30f58940c73c3a60af79c211f9f55c9209ffc" exitCode=0 Dec 01 14:48:39 crc kubenswrapper[4637]: I1201 14:48:39.140187 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-z2gpq" event={"ID":"cbf56a46-431a-40ef-985f-8eb89ee80d70","Type":"ContainerDied","Data":"ff145903317a06f18d0daa26f9c30f58940c73c3a60af79c211f9f55c9209ffc"} Dec 01 14:48:39 crc kubenswrapper[4637]: I1201 14:48:39.205527 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:39 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:39 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:39 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:39 crc kubenswrapper[4637]: I1201 14:48:39.205590 4637 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:40 crc kubenswrapper[4637]: I1201 14:48:40.201706 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:40 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:40 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:40 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:40 crc kubenswrapper[4637]: I1201 14:48:40.201772 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:40 crc kubenswrapper[4637]: I1201 14:48:40.240493 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:48:41 crc kubenswrapper[4637]: I1201 14:48:41.204060 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:41 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:41 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:41 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:41 crc kubenswrapper[4637]: I1201 14:48:41.204133 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:42 crc kubenswrapper[4637]: I1201 14:48:42.202410 4637 patch_prober.go:28] interesting pod/router-default-5444994796-nbf7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 14:48:42 crc kubenswrapper[4637]: [-]has-synced failed: reason withheld Dec 01 14:48:42 crc kubenswrapper[4637]: [+]process-running ok Dec 01 14:48:42 crc kubenswrapper[4637]: healthz check failed Dec 01 14:48:42 crc kubenswrapper[4637]: I1201 14:48:42.202716 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbf7h" podUID="067f5196-983d-4c49-a194-68357dfb4963" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:48:43 crc kubenswrapper[4637]: I1201 14:48:43.210296 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-nbf7h" Dec 01 14:48:43 crc kubenswrapper[4637]: I1201 14:48:43.217608 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-nbf7h" Dec 01 14:48:45 crc kubenswrapper[4637]: I1201 14:48:45.613784 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 14:48:45 crc kubenswrapper[4637]: I1201 14:48:45.614264 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Dec 01 14:48:46 crc kubenswrapper[4637]: I1201 14:48:46.988750 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-98z2t" Dec 01 14:48:46 crc kubenswrapper[4637]: I1201 14:48:46.997410 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-98z2t" Dec 01 14:48:48 crc kubenswrapper[4637]: I1201 14:48:48.167108 4637 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gpq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Dec 01 14:48:48 crc kubenswrapper[4637]: I1201 14:48:48.167215 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z2gpq" podUID="cbf56a46-431a-40ef-985f-8eb89ee80d70" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Dec 01 14:48:48 crc kubenswrapper[4637]: I1201 14:48:48.982413 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rdmp7" Dec 01 14:48:56 crc kubenswrapper[4637]: I1201 14:48:56.696659 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 14:48:58 crc kubenswrapper[4637]: I1201 14:48:58.170908 4637 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gpq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Dec 01 14:48:58 crc kubenswrapper[4637]: I1201 14:48:58.171003 4637 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-z2gpq" podUID="cbf56a46-431a-40ef-985f-8eb89ee80d70" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Dec 01 14:48:58 crc kubenswrapper[4637]: I1201 14:48:58.444317 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 01 14:48:58 crc kubenswrapper[4637]: E1201 14:48:58.444729 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d" containerName="pruner" Dec 01 14:48:58 crc kubenswrapper[4637]: I1201 14:48:58.444750 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d" containerName="pruner" Dec 01 14:48:58 crc kubenswrapper[4637]: E1201 14:48:58.444772 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eda0139-3bd8-4d55-8f58-c9ee8e366637" containerName="pruner" Dec 01 14:48:58 crc kubenswrapper[4637]: I1201 14:48:58.444784 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eda0139-3bd8-4d55-8f58-c9ee8e366637" containerName="pruner" Dec 01 14:48:58 crc kubenswrapper[4637]: I1201 14:48:58.444923 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eda0139-3bd8-4d55-8f58-c9ee8e366637" containerName="pruner" Dec 01 14:48:58 crc kubenswrapper[4637]: I1201 14:48:58.444953 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e26fa2c-46e9-4c37-bc8b-7c23ac3c128d" containerName="pruner" Dec 01 14:48:58 crc kubenswrapper[4637]: I1201 14:48:58.445525 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 14:48:58 crc kubenswrapper[4637]: I1201 14:48:58.447507 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 01 14:48:58 crc kubenswrapper[4637]: I1201 14:48:58.448459 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 01 14:48:58 crc kubenswrapper[4637]: I1201 14:48:58.448540 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 01 14:48:58 crc kubenswrapper[4637]: I1201 14:48:58.584595 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c007f6a-915a-4fb6-baf9-ff2a1c537315-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6c007f6a-915a-4fb6-baf9-ff2a1c537315\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 14:48:58 crc kubenswrapper[4637]: I1201 14:48:58.584667 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c007f6a-915a-4fb6-baf9-ff2a1c537315-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6c007f6a-915a-4fb6-baf9-ff2a1c537315\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 14:48:58 crc kubenswrapper[4637]: I1201 14:48:58.685496 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c007f6a-915a-4fb6-baf9-ff2a1c537315-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6c007f6a-915a-4fb6-baf9-ff2a1c537315\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 14:48:58 crc kubenswrapper[4637]: I1201 14:48:58.685561 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/6c007f6a-915a-4fb6-baf9-ff2a1c537315-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6c007f6a-915a-4fb6-baf9-ff2a1c537315\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 14:48:58 crc kubenswrapper[4637]: I1201 14:48:58.685662 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c007f6a-915a-4fb6-baf9-ff2a1c537315-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6c007f6a-915a-4fb6-baf9-ff2a1c537315\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 14:48:58 crc kubenswrapper[4637]: I1201 14:48:58.705762 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c007f6a-915a-4fb6-baf9-ff2a1c537315-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6c007f6a-915a-4fb6-baf9-ff2a1c537315\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 14:48:58 crc kubenswrapper[4637]: I1201 14:48:58.785891 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 14:49:03 crc kubenswrapper[4637]: I1201 14:49:03.632750 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 01 14:49:03 crc kubenswrapper[4637]: I1201 14:49:03.637035 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 01 14:49:03 crc kubenswrapper[4637]: I1201 14:49:03.640741 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 01 14:49:03 crc kubenswrapper[4637]: I1201 14:49:03.737222 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73b64284-bacb-455e-9e41-7c7f7deeacfc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"73b64284-bacb-455e-9e41-7c7f7deeacfc\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 14:49:03 crc kubenswrapper[4637]: I1201 14:49:03.737276 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73b64284-bacb-455e-9e41-7c7f7deeacfc-kube-api-access\") pod \"installer-9-crc\" (UID: \"73b64284-bacb-455e-9e41-7c7f7deeacfc\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 14:49:03 crc kubenswrapper[4637]: I1201 14:49:03.737466 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/73b64284-bacb-455e-9e41-7c7f7deeacfc-var-lock\") pod \"installer-9-crc\" (UID: \"73b64284-bacb-455e-9e41-7c7f7deeacfc\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 14:49:03 crc kubenswrapper[4637]: I1201 14:49:03.859268 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/73b64284-bacb-455e-9e41-7c7f7deeacfc-var-lock\") pod \"installer-9-crc\" (UID: \"73b64284-bacb-455e-9e41-7c7f7deeacfc\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 14:49:03 crc kubenswrapper[4637]: I1201 14:49:03.859758 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/73b64284-bacb-455e-9e41-7c7f7deeacfc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"73b64284-bacb-455e-9e41-7c7f7deeacfc\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 14:49:03 crc kubenswrapper[4637]: I1201 14:49:03.859783 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73b64284-bacb-455e-9e41-7c7f7deeacfc-kube-api-access\") pod \"installer-9-crc\" (UID: \"73b64284-bacb-455e-9e41-7c7f7deeacfc\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 14:49:03 crc kubenswrapper[4637]: I1201 14:49:03.859511 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/73b64284-bacb-455e-9e41-7c7f7deeacfc-var-lock\") pod \"installer-9-crc\" (UID: \"73b64284-bacb-455e-9e41-7c7f7deeacfc\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 14:49:03 crc kubenswrapper[4637]: I1201 14:49:03.859873 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73b64284-bacb-455e-9e41-7c7f7deeacfc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"73b64284-bacb-455e-9e41-7c7f7deeacfc\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 14:49:03 crc kubenswrapper[4637]: I1201 14:49:03.886375 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73b64284-bacb-455e-9e41-7c7f7deeacfc-kube-api-access\") pod \"installer-9-crc\" (UID: \"73b64284-bacb-455e-9e41-7c7f7deeacfc\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 14:49:03 crc kubenswrapper[4637]: I1201 14:49:03.963687 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 01 14:49:07 crc kubenswrapper[4637]: E1201 14:49:06.998781 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 01 14:49:07 crc kubenswrapper[4637]: E1201 14:49:07.004062 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xg8bp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResiz
ePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-l2zct_openshift-marketplace(8023e8dc-e1e9-48d5-b1de-005d6e38e174): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 14:49:07 crc kubenswrapper[4637]: E1201 14:49:07.005833 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-l2zct" podUID="8023e8dc-e1e9-48d5-b1de-005d6e38e174" Dec 01 14:49:07 crc kubenswrapper[4637]: E1201 14:49:07.011799 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 01 14:49:07 crc kubenswrapper[4637]: E1201 14:49:07.012849 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9cfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-6nhdr_openshift-marketplace(f9bf0fdb-f832-4c97-a1e4-74aace880d56): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 14:49:07 crc kubenswrapper[4637]: E1201 14:49:07.014150 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-6nhdr" podUID="f9bf0fdb-f832-4c97-a1e4-74aace880d56" Dec 01 14:49:08 crc 
kubenswrapper[4637]: I1201 14:49:08.165810 4637 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gpq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Dec 01 14:49:08 crc kubenswrapper[4637]: I1201 14:49:08.166292 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z2gpq" podUID="cbf56a46-431a-40ef-985f-8eb89ee80d70" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Dec 01 14:49:08 crc kubenswrapper[4637]: E1201 14:49:08.464739 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-l2zct" podUID="8023e8dc-e1e9-48d5-b1de-005d6e38e174" Dec 01 14:49:08 crc kubenswrapper[4637]: E1201 14:49:08.464820 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-6nhdr" podUID="f9bf0fdb-f832-4c97-a1e4-74aace880d56" Dec 01 14:49:08 crc kubenswrapper[4637]: E1201 14:49:08.539339 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 01 14:49:08 crc kubenswrapper[4637]: E1201 14:49:08.539497 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8vg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8wk6j_openshift-marketplace(85895600-b021-44c3-ac07-f6ccd4f40226): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 14:49:08 crc kubenswrapper[4637]: E1201 14:49:08.540873 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code 
= Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-8wk6j" podUID="85895600-b021-44c3-ac07-f6ccd4f40226" Dec 01 14:49:10 crc kubenswrapper[4637]: I1201 14:49:10.345031 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kvhqq"] Dec 01 14:49:11 crc kubenswrapper[4637]: E1201 14:49:11.439098 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8wk6j" podUID="85895600-b021-44c3-ac07-f6ccd4f40226" Dec 01 14:49:11 crc kubenswrapper[4637]: E1201 14:49:11.503748 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 01 14:49:11 crc kubenswrapper[4637]: E1201 14:49:11.503919 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ws2gn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-bfht5_openshift-marketplace(90a8e718-fe2f-4e8f-acc6-bb25efde0385): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 14:49:11 crc kubenswrapper[4637]: E1201 14:49:11.505837 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-bfht5" podUID="90a8e718-fe2f-4e8f-acc6-bb25efde0385" Dec 01 14:49:11 crc 
kubenswrapper[4637]: E1201 14:49:11.520569 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 01 14:49:11 crc kubenswrapper[4637]: E1201 14:49:11.520816 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cghlr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-9m7pd_openshift-marketplace(fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 14:49:11 crc kubenswrapper[4637]: E1201 14:49:11.522034 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-9m7pd" podUID="fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673" Dec 01 14:49:13 crc kubenswrapper[4637]: E1201 14:49:13.573392 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-bfht5" podUID="90a8e718-fe2f-4e8f-acc6-bb25efde0385" Dec 01 14:49:13 crc kubenswrapper[4637]: E1201 14:49:13.573444 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9m7pd" podUID="fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673" Dec 01 14:49:13 crc kubenswrapper[4637]: E1201 14:49:13.643747 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 01 14:49:13 crc kubenswrapper[4637]: E1201 14:49:13.644307 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gxdm7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-2k4sx_openshift-marketplace(3b76fb53-14a1-49f9-b120-a4b492ab70fc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 14:49:13 crc kubenswrapper[4637]: E1201 14:49:13.648085 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-2k4sx" podUID="3b76fb53-14a1-49f9-b120-a4b492ab70fc" Dec 01 14:49:13 crc kubenswrapper[4637]: E1201 14:49:13.699422 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 01 14:49:13 crc kubenswrapper[4637]: E1201 14:49:13.699670 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fgkx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFro
mSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-79qxf_openshift-marketplace(f427d1c1-5d40-4473-8295-4f271899dd13): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 14:49:13 crc kubenswrapper[4637]: E1201 14:49:13.701215 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-79qxf" podUID="f427d1c1-5d40-4473-8295-4f271899dd13" Dec 01 14:49:13 crc kubenswrapper[4637]: E1201 14:49:13.719592 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 01 14:49:13 crc kubenswrapper[4637]: E1201 14:49:13.719768 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8tfkn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4kmbq_openshift-marketplace(cf829738-1178-4c69-add1-22239dd6b4c9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 14:49:13 crc kubenswrapper[4637]: E1201 14:49:13.721602 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-4kmbq" podUID="cf829738-1178-4c69-add1-22239dd6b4c9" Dec 01 14:49:13 crc 
kubenswrapper[4637]: I1201 14:49:13.988610 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 01 14:49:14 crc kubenswrapper[4637]: I1201 14:49:14.124091 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 01 14:49:14 crc kubenswrapper[4637]: I1201 14:49:14.570821 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-z2gpq" event={"ID":"cbf56a46-431a-40ef-985f-8eb89ee80d70","Type":"ContainerStarted","Data":"6557f5447d1cacf9057aa0f9d014690822013010c72663a8f885e9174e400665"} Dec 01 14:49:14 crc kubenswrapper[4637]: I1201 14:49:14.571462 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-z2gpq" Dec 01 14:49:14 crc kubenswrapper[4637]: I1201 14:49:14.571387 4637 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gpq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Dec 01 14:49:14 crc kubenswrapper[4637]: I1201 14:49:14.571610 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z2gpq" podUID="cbf56a46-431a-40ef-985f-8eb89ee80d70" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Dec 01 14:49:14 crc kubenswrapper[4637]: I1201 14:49:14.572862 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"73b64284-bacb-455e-9e41-7c7f7deeacfc","Type":"ContainerStarted","Data":"495ffa786ae47344261c20f4327d9a99cb824fb442131313c453ffe8929e2bf7"} Dec 01 14:49:14 crc kubenswrapper[4637]: I1201 14:49:14.576420 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6c007f6a-915a-4fb6-baf9-ff2a1c537315","Type":"ContainerStarted","Data":"bb3968941ef4cb252ecb40e31d3a97cc56edfdc55f7ef8088419c7a97b55b110"} Dec 01 14:49:14 crc kubenswrapper[4637]: E1201 14:49:14.578741 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4kmbq" podUID="cf829738-1178-4c69-add1-22239dd6b4c9" Dec 01 14:49:14 crc kubenswrapper[4637]: E1201 14:49:14.581986 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2k4sx" podUID="3b76fb53-14a1-49f9-b120-a4b492ab70fc" Dec 01 14:49:14 crc kubenswrapper[4637]: E1201 14:49:14.582545 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-79qxf" podUID="f427d1c1-5d40-4473-8295-4f271899dd13" Dec 01 14:49:15 crc kubenswrapper[4637]: I1201 14:49:15.583196 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6c007f6a-915a-4fb6-baf9-ff2a1c537315","Type":"ContainerStarted","Data":"70ee2714fec454375252136696713c4c7ff1dcffe63308bc098b7523a5ac1da3"} Dec 01 14:49:15 crc kubenswrapper[4637]: I1201 14:49:15.584806 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"73b64284-bacb-455e-9e41-7c7f7deeacfc","Type":"ContainerStarted","Data":"3eb93426d47e9c9b6dc3db18c89b0131a7635fb926510b58dbf07de10c98b562"} Dec 01 14:49:15 crc kubenswrapper[4637]: I1201 14:49:15.585354 4637 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gpq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Dec 01 14:49:15 crc kubenswrapper[4637]: I1201 14:49:15.585404 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z2gpq" podUID="cbf56a46-431a-40ef-985f-8eb89ee80d70" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Dec 01 14:49:15 crc kubenswrapper[4637]: I1201 14:49:15.613457 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 14:49:15 crc kubenswrapper[4637]: I1201 14:49:15.613512 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 14:49:15 crc kubenswrapper[4637]: I1201 14:49:15.613546 4637 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 14:49:15 crc kubenswrapper[4637]: I1201 14:49:15.614374 4637 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8"} pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 14:49:15 crc kubenswrapper[4637]: I1201 14:49:15.614523 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" containerID="cri-o://8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8" gracePeriod=600 Dec 01 14:49:15 crc kubenswrapper[4637]: I1201 14:49:15.615103 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=17.615076742 podStartE2EDuration="17.615076742s" podCreationTimestamp="2025-12-01 14:48:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:49:15.598599623 +0000 UTC m=+206.116308471" watchObservedRunningTime="2025-12-01 14:49:15.615076742 +0000 UTC m=+206.132785570" Dec 01 14:49:15 crc kubenswrapper[4637]: I1201 14:49:15.616631 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=12.616623541 podStartE2EDuration="12.616623541s" podCreationTimestamp="2025-12-01 14:49:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:49:15.613838052 +0000 UTC m=+206.131546880" watchObservedRunningTime="2025-12-01 14:49:15.616623541 +0000 UTC m=+206.134332369" Dec 01 14:49:16 crc kubenswrapper[4637]: I1201 14:49:16.592419 4637 generic.go:334] "Generic (PLEG): container finished" podID="6c007f6a-915a-4fb6-baf9-ff2a1c537315" 
containerID="70ee2714fec454375252136696713c4c7ff1dcffe63308bc098b7523a5ac1da3" exitCode=0 Dec 01 14:49:16 crc kubenswrapper[4637]: I1201 14:49:16.593470 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6c007f6a-915a-4fb6-baf9-ff2a1c537315","Type":"ContainerDied","Data":"70ee2714fec454375252136696713c4c7ff1dcffe63308bc098b7523a5ac1da3"} Dec 01 14:49:16 crc kubenswrapper[4637]: I1201 14:49:16.595772 4637 generic.go:334] "Generic (PLEG): container finished" podID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerID="8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8" exitCode=0 Dec 01 14:49:16 crc kubenswrapper[4637]: I1201 14:49:16.596679 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerDied","Data":"8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8"} Dec 01 14:49:16 crc kubenswrapper[4637]: I1201 14:49:16.596713 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerStarted","Data":"4885cfa0f150a63ffa8391a1ed4c896a43e4cdd3b372dc06af2d7e94293fae9c"} Dec 01 14:49:17 crc kubenswrapper[4637]: I1201 14:49:17.909121 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 14:49:18 crc kubenswrapper[4637]: I1201 14:49:18.026123 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c007f6a-915a-4fb6-baf9-ff2a1c537315-kubelet-dir\") pod \"6c007f6a-915a-4fb6-baf9-ff2a1c537315\" (UID: \"6c007f6a-915a-4fb6-baf9-ff2a1c537315\") " Dec 01 14:49:18 crc kubenswrapper[4637]: I1201 14:49:18.026226 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c007f6a-915a-4fb6-baf9-ff2a1c537315-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6c007f6a-915a-4fb6-baf9-ff2a1c537315" (UID: "6c007f6a-915a-4fb6-baf9-ff2a1c537315"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:49:18 crc kubenswrapper[4637]: I1201 14:49:18.026446 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c007f6a-915a-4fb6-baf9-ff2a1c537315-kube-api-access\") pod \"6c007f6a-915a-4fb6-baf9-ff2a1c537315\" (UID: \"6c007f6a-915a-4fb6-baf9-ff2a1c537315\") " Dec 01 14:49:18 crc kubenswrapper[4637]: I1201 14:49:18.026850 4637 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c007f6a-915a-4fb6-baf9-ff2a1c537315-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 01 14:49:18 crc kubenswrapper[4637]: I1201 14:49:18.035209 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c007f6a-915a-4fb6-baf9-ff2a1c537315-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6c007f6a-915a-4fb6-baf9-ff2a1c537315" (UID: "6c007f6a-915a-4fb6-baf9-ff2a1c537315"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:49:18 crc kubenswrapper[4637]: I1201 14:49:18.128847 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c007f6a-915a-4fb6-baf9-ff2a1c537315-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 14:49:18 crc kubenswrapper[4637]: I1201 14:49:18.166644 4637 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gpq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Dec 01 14:49:18 crc kubenswrapper[4637]: I1201 14:49:18.166745 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-z2gpq" podUID="cbf56a46-431a-40ef-985f-8eb89ee80d70" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Dec 01 14:49:18 crc kubenswrapper[4637]: I1201 14:49:18.167325 4637 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gpq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Dec 01 14:49:18 crc kubenswrapper[4637]: I1201 14:49:18.167414 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z2gpq" podUID="cbf56a46-431a-40ef-985f-8eb89ee80d70" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Dec 01 14:49:18 crc kubenswrapper[4637]: I1201 14:49:18.607735 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"6c007f6a-915a-4fb6-baf9-ff2a1c537315","Type":"ContainerDied","Data":"bb3968941ef4cb252ecb40e31d3a97cc56edfdc55f7ef8088419c7a97b55b110"} Dec 01 14:49:18 crc kubenswrapper[4637]: I1201 14:49:18.608231 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb3968941ef4cb252ecb40e31d3a97cc56edfdc55f7ef8088419c7a97b55b110" Dec 01 14:49:18 crc kubenswrapper[4637]: I1201 14:49:18.607792 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 14:49:25 crc kubenswrapper[4637]: I1201 14:49:25.648704 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nhdr" event={"ID":"f9bf0fdb-f832-4c97-a1e4-74aace880d56","Type":"ContainerStarted","Data":"0f0fedbac598256694fedf8e82d761c1967f7598456bcc80bb8d0dbf56856c6f"} Dec 01 14:49:26 crc kubenswrapper[4637]: I1201 14:49:26.660325 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8wk6j" event={"ID":"85895600-b021-44c3-ac07-f6ccd4f40226","Type":"ContainerStarted","Data":"254f9c67e8fcdaf47a430594ba8ea418949db181da741e09e4a695906d325e10"} Dec 01 14:49:27 crc kubenswrapper[4637]: I1201 14:49:27.683034 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2zct" event={"ID":"8023e8dc-e1e9-48d5-b1de-005d6e38e174","Type":"ContainerStarted","Data":"d119a5b001756a026683ecebc2394b8bbe3267f3bbf5fd8e3aa92ba228bd5e59"} Dec 01 14:49:28 crc kubenswrapper[4637]: I1201 14:49:28.172367 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-z2gpq" Dec 01 14:49:29 crc kubenswrapper[4637]: I1201 14:49:29.702410 4637 generic.go:334] "Generic (PLEG): container finished" podID="85895600-b021-44c3-ac07-f6ccd4f40226" containerID="254f9c67e8fcdaf47a430594ba8ea418949db181da741e09e4a695906d325e10" exitCode=0 Dec 01 14:49:29 crc 
kubenswrapper[4637]: I1201 14:49:29.702469 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8wk6j" event={"ID":"85895600-b021-44c3-ac07-f6ccd4f40226","Type":"ContainerDied","Data":"254f9c67e8fcdaf47a430594ba8ea418949db181da741e09e4a695906d325e10"} Dec 01 14:49:29 crc kubenswrapper[4637]: I1201 14:49:29.712414 4637 generic.go:334] "Generic (PLEG): container finished" podID="8023e8dc-e1e9-48d5-b1de-005d6e38e174" containerID="d119a5b001756a026683ecebc2394b8bbe3267f3bbf5fd8e3aa92ba228bd5e59" exitCode=0 Dec 01 14:49:29 crc kubenswrapper[4637]: I1201 14:49:29.712486 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2zct" event={"ID":"8023e8dc-e1e9-48d5-b1de-005d6e38e174","Type":"ContainerDied","Data":"d119a5b001756a026683ecebc2394b8bbe3267f3bbf5fd8e3aa92ba228bd5e59"} Dec 01 14:49:29 crc kubenswrapper[4637]: I1201 14:49:29.726227 4637 generic.go:334] "Generic (PLEG): container finished" podID="f9bf0fdb-f832-4c97-a1e4-74aace880d56" containerID="0f0fedbac598256694fedf8e82d761c1967f7598456bcc80bb8d0dbf56856c6f" exitCode=0 Dec 01 14:49:29 crc kubenswrapper[4637]: I1201 14:49:29.726465 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nhdr" event={"ID":"f9bf0fdb-f832-4c97-a1e4-74aace880d56","Type":"ContainerDied","Data":"0f0fedbac598256694fedf8e82d761c1967f7598456bcc80bb8d0dbf56856c6f"} Dec 01 14:49:30 crc kubenswrapper[4637]: I1201 14:49:30.735103 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9m7pd" event={"ID":"fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673","Type":"ContainerStarted","Data":"5722a373cbd3feaa4e37234c9ccb02e9636d013cbed1f83dd64bc5ba2052c2c8"} Dec 01 14:49:30 crc kubenswrapper[4637]: I1201 14:49:30.742612 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfht5" 
event={"ID":"90a8e718-fe2f-4e8f-acc6-bb25efde0385","Type":"ContainerStarted","Data":"c9204562a43459cebfae9c844c6f8aaee4ff880cddf2192897c27c7e8261dc59"} Dec 01 14:49:30 crc kubenswrapper[4637]: I1201 14:49:30.744830 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-79qxf" event={"ID":"f427d1c1-5d40-4473-8295-4f271899dd13","Type":"ContainerStarted","Data":"5ae42c91c254a61ed0e67f0313e373df83be8b3016e0f4e22467441a7a32e928"} Dec 01 14:49:31 crc kubenswrapper[4637]: I1201 14:49:31.771693 4637 generic.go:334] "Generic (PLEG): container finished" podID="90a8e718-fe2f-4e8f-acc6-bb25efde0385" containerID="c9204562a43459cebfae9c844c6f8aaee4ff880cddf2192897c27c7e8261dc59" exitCode=0 Dec 01 14:49:31 crc kubenswrapper[4637]: I1201 14:49:31.779719 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfht5" event={"ID":"90a8e718-fe2f-4e8f-acc6-bb25efde0385","Type":"ContainerDied","Data":"c9204562a43459cebfae9c844c6f8aaee4ff880cddf2192897c27c7e8261dc59"} Dec 01 14:49:32 crc kubenswrapper[4637]: I1201 14:49:32.779047 4637 generic.go:334] "Generic (PLEG): container finished" podID="fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673" containerID="5722a373cbd3feaa4e37234c9ccb02e9636d013cbed1f83dd64bc5ba2052c2c8" exitCode=0 Dec 01 14:49:32 crc kubenswrapper[4637]: I1201 14:49:32.779113 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9m7pd" event={"ID":"fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673","Type":"ContainerDied","Data":"5722a373cbd3feaa4e37234c9ccb02e9636d013cbed1f83dd64bc5ba2052c2c8"} Dec 01 14:49:32 crc kubenswrapper[4637]: I1201 14:49:32.782588 4637 generic.go:334] "Generic (PLEG): container finished" podID="f427d1c1-5d40-4473-8295-4f271899dd13" containerID="5ae42c91c254a61ed0e67f0313e373df83be8b3016e0f4e22467441a7a32e928" exitCode=0 Dec 01 14:49:32 crc kubenswrapper[4637]: I1201 14:49:32.782639 4637 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-79qxf" event={"ID":"f427d1c1-5d40-4473-8295-4f271899dd13","Type":"ContainerDied","Data":"5ae42c91c254a61ed0e67f0313e373df83be8b3016e0f4e22467441a7a32e928"} Dec 01 14:49:35 crc kubenswrapper[4637]: I1201 14:49:35.397357 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" podUID="b7950539-3e35-4e74-8a69-c2b3c3ba928f" containerName="oauth-openshift" containerID="cri-o://54438a017834f259b9f6b91c4d23d0b07629e2b1718ba73047521709c3728880" gracePeriod=15 Dec 01 14:49:35 crc kubenswrapper[4637]: I1201 14:49:35.799330 4637 generic.go:334] "Generic (PLEG): container finished" podID="b7950539-3e35-4e74-8a69-c2b3c3ba928f" containerID="54438a017834f259b9f6b91c4d23d0b07629e2b1718ba73047521709c3728880" exitCode=0 Dec 01 14:49:35 crc kubenswrapper[4637]: I1201 14:49:35.799393 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" event={"ID":"b7950539-3e35-4e74-8a69-c2b3c3ba928f","Type":"ContainerDied","Data":"54438a017834f259b9f6b91c4d23d0b07629e2b1718ba73047521709c3728880"} Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.380656 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.419095 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w"] Dec 01 14:49:37 crc kubenswrapper[4637]: E1201 14:49:37.419350 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7950539-3e35-4e74-8a69-c2b3c3ba928f" containerName="oauth-openshift" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.419372 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7950539-3e35-4e74-8a69-c2b3c3ba928f" containerName="oauth-openshift" Dec 01 14:49:37 crc kubenswrapper[4637]: E1201 14:49:37.419401 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c007f6a-915a-4fb6-baf9-ff2a1c537315" containerName="pruner" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.419410 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c007f6a-915a-4fb6-baf9-ff2a1c537315" containerName="pruner" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.419549 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7950539-3e35-4e74-8a69-c2b3c3ba928f" containerName="oauth-openshift" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.419598 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c007f6a-915a-4fb6-baf9-ff2a1c537315" containerName="pruner" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.420071 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.435867 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w"] Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.457865 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-user-template-login\") pod \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.457911 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4gzn\" (UniqueName: \"kubernetes.io/projected/b7950539-3e35-4e74-8a69-c2b3c3ba928f-kube-api-access-t4gzn\") pod \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.457984 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-user-template-provider-selection\") pod \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.458006 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-user-idp-0-file-data\") pod \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.458046 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-service-ca\") pod \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.458076 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-router-certs\") pod \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.458104 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-ocp-branding-template\") pod \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.458158 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-serving-cert\") pod \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.458178 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7950539-3e35-4e74-8a69-c2b3c3ba928f-audit-policies\") pod \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.458226 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-session\") pod \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.458263 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-trusted-ca-bundle\") pod \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.458295 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7950539-3e35-4e74-8a69-c2b3c3ba928f-audit-dir\") pod \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.458333 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-user-template-error\") pod \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.458365 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-cliconfig\") pod \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\" (UID: \"b7950539-3e35-4e74-8a69-c2b3c3ba928f\") " Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.459053 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7950539-3e35-4e74-8a69-c2b3c3ba928f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b7950539-3e35-4e74-8a69-c2b3c3ba928f" (UID: 
"b7950539-3e35-4e74-8a69-c2b3c3ba928f"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.459800 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b7950539-3e35-4e74-8a69-c2b3c3ba928f" (UID: "b7950539-3e35-4e74-8a69-c2b3c3ba928f"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.460012 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "b7950539-3e35-4e74-8a69-c2b3c3ba928f" (UID: "b7950539-3e35-4e74-8a69-c2b3c3ba928f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.460179 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "b7950539-3e35-4e74-8a69-c2b3c3ba928f" (UID: "b7950539-3e35-4e74-8a69-c2b3c3ba928f"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.460650 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7950539-3e35-4e74-8a69-c2b3c3ba928f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "b7950539-3e35-4e74-8a69-c2b3c3ba928f" (UID: "b7950539-3e35-4e74-8a69-c2b3c3ba928f"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.469606 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b7950539-3e35-4e74-8a69-c2b3c3ba928f" (UID: "b7950539-3e35-4e74-8a69-c2b3c3ba928f"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.470284 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b7950539-3e35-4e74-8a69-c2b3c3ba928f" (UID: "b7950539-3e35-4e74-8a69-c2b3c3ba928f"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.560379 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.560439 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-user-template-error\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.560463 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjlfk\" (UniqueName: \"kubernetes.io/projected/e288e57c-c4dc-4134-b964-3e6d4469989f-kube-api-access-bjlfk\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.560575 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-user-template-login\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.560634 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.560659 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-system-session\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.560746 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e288e57c-c4dc-4134-b964-3e6d4469989f-audit-policies\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.560777 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.560872 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.560922 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.561214 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.561333 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e288e57c-c4dc-4134-b964-3e6d4469989f-audit-dir\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.561368 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: 
\"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.561393 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.561528 4637 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7950539-3e35-4e74-8a69-c2b3c3ba928f-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.561550 4637 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.561564 4637 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.561576 4637 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.561587 4637 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.561601 4637 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7950539-3e35-4e74-8a69-c2b3c3ba928f-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.561615 4637 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.663055 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.663149 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.663943 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-user-template-error\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " 
pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.663978 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjlfk\" (UniqueName: \"kubernetes.io/projected/e288e57c-c4dc-4134-b964-3e6d4469989f-kube-api-access-bjlfk\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.664039 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-user-template-login\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.664086 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.664113 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-system-session\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.664138 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/e288e57c-c4dc-4134-b964-3e6d4469989f-audit-policies\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.664197 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.664267 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.664294 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.664367 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 
01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.664419 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e288e57c-c4dc-4134-b964-3e6d4469989f-audit-dir\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.664446 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.664546 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7950539-3e35-4e74-8a69-c2b3c3ba928f-kube-api-access-t4gzn" (OuterVolumeSpecName: "kube-api-access-t4gzn") pod "b7950539-3e35-4e74-8a69-c2b3c3ba928f" (UID: "b7950539-3e35-4e74-8a69-c2b3c3ba928f"). InnerVolumeSpecName "kube-api-access-t4gzn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.665696 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e288e57c-c4dc-4134-b964-3e6d4469989f-audit-policies\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.666443 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.666702 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.667673 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.667764 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/e288e57c-c4dc-4134-b964-3e6d4469989f-audit-dir\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.668643 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.668999 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.671097 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-user-template-login\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.680675 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-user-template-error\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 
14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.684527 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.685590 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "b7950539-3e35-4e74-8a69-c2b3c3ba928f" (UID: "b7950539-3e35-4e74-8a69-c2b3c3ba928f"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.686324 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b7950539-3e35-4e74-8a69-c2b3c3ba928f" (UID: "b7950539-3e35-4e74-8a69-c2b3c3ba928f"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.686820 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b7950539-3e35-4e74-8a69-c2b3c3ba928f" (UID: "b7950539-3e35-4e74-8a69-c2b3c3ba928f"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.701016 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-system-session\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.702682 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b7950539-3e35-4e74-8a69-c2b3c3ba928f" (UID: "b7950539-3e35-4e74-8a69-c2b3c3ba928f"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.702994 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.707616 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e288e57c-c4dc-4134-b964-3e6d4469989f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.722775 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjlfk\" (UniqueName: 
\"kubernetes.io/projected/e288e57c-c4dc-4134-b964-3e6d4469989f-kube-api-access-bjlfk\") pod \"oauth-openshift-6c5fcdcf5-dnk6w\" (UID: \"e288e57c-c4dc-4134-b964-3e6d4469989f\") " pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.724722 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b7950539-3e35-4e74-8a69-c2b3c3ba928f" (UID: "b7950539-3e35-4e74-8a69-c2b3c3ba928f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.739568 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b7950539-3e35-4e74-8a69-c2b3c3ba928f" (UID: "b7950539-3e35-4e74-8a69-c2b3c3ba928f"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.765552 4637 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.765583 4637 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.765597 4637 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.765608 4637 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.765618 4637 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.765628 4637 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7950539-3e35-4e74-8a69-c2b3c3ba928f-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.765637 4637 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-t4gzn\" (UniqueName: \"kubernetes.io/projected/b7950539-3e35-4e74-8a69-c2b3c3ba928f-kube-api-access-t4gzn\") on node \"crc\" DevicePath \"\"" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.818839 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9m7pd" event={"ID":"fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673","Type":"ContainerStarted","Data":"551deabf40ed0c7be20bcfe43e079947e228817ce839817cfcd495039c3febe6"} Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.823948 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kmbq" event={"ID":"cf829738-1178-4c69-add1-22239dd6b4c9","Type":"ContainerStarted","Data":"ef787a6bbb86388bb4d01512a1107830712ebc4036a4ca5e486a212a40ff0332"} Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.826817 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" event={"ID":"b7950539-3e35-4e74-8a69-c2b3c3ba928f","Type":"ContainerDied","Data":"4096694e623c2ce7a7db3c7651342cea4f694680716d3e885d64dfa3cd20708f"} Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.826898 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kvhqq" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.826965 4637 scope.go:117] "RemoveContainer" containerID="54438a017834f259b9f6b91c4d23d0b07629e2b1718ba73047521709c3728880" Dec 01 14:49:37 crc kubenswrapper[4637]: I1201 14:49:37.848242 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9m7pd" podStartSLOduration=3.8898225870000003 podStartE2EDuration="1m17.848201881s" podCreationTimestamp="2025-12-01 14:48:20 +0000 UTC" firstStartedPulling="2025-12-01 14:48:23.162454099 +0000 UTC m=+153.680162927" lastFinishedPulling="2025-12-01 14:49:37.120833393 +0000 UTC m=+227.638542221" observedRunningTime="2025-12-01 14:49:37.84442674 +0000 UTC m=+228.362135568" watchObservedRunningTime="2025-12-01 14:49:37.848201881 +0000 UTC m=+228.365910709" Dec 01 14:49:38 crc kubenswrapper[4637]: I1201 14:49:38.023104 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:38 crc kubenswrapper[4637]: I1201 14:49:38.140873 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kvhqq"] Dec 01 14:49:38 crc kubenswrapper[4637]: I1201 14:49:38.143707 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kvhqq"] Dec 01 14:49:38 crc kubenswrapper[4637]: I1201 14:49:38.751634 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w"] Dec 01 14:49:38 crc kubenswrapper[4637]: I1201 14:49:38.833650 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-79qxf" event={"ID":"f427d1c1-5d40-4473-8295-4f271899dd13","Type":"ContainerStarted","Data":"6a07cc099a093a1a564bee6d7aa362132dd30d7590aa1bccb26a76ca646cef0b"} Dec 01 14:49:38 crc kubenswrapper[4637]: I1201 14:49:38.840359 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" event={"ID":"e288e57c-c4dc-4134-b964-3e6d4469989f","Type":"ContainerStarted","Data":"fbb0dca7084cea1483323b7ad88b1f3506e63d78d3bd62e5bb7af78e34ec3167"} Dec 01 14:49:38 crc kubenswrapper[4637]: I1201 14:49:38.874285 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2zct" event={"ID":"8023e8dc-e1e9-48d5-b1de-005d6e38e174","Type":"ContainerStarted","Data":"bab648da99f54d6442044d2d536974e817a278d439cf465e9761f8a4eb524140"} Dec 01 14:49:38 crc kubenswrapper[4637]: I1201 14:49:38.878827 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nhdr" event={"ID":"f9bf0fdb-f832-4c97-a1e4-74aace880d56","Type":"ContainerStarted","Data":"fb30f9c5047f9af0e12f923b556d095973f45c9bb228f6ee9640246c532cc441"} Dec 01 14:49:38 crc kubenswrapper[4637]: I1201 14:49:38.886027 
4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfht5" event={"ID":"90a8e718-fe2f-4e8f-acc6-bb25efde0385","Type":"ContainerStarted","Data":"1f21d9c09ffa1d58944d6cbcc0f5b6702c22f8cb91f68f5dffe1babbd07826c2"} Dec 01 14:49:38 crc kubenswrapper[4637]: I1201 14:49:38.888570 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2k4sx" event={"ID":"3b76fb53-14a1-49f9-b120-a4b492ab70fc","Type":"ContainerStarted","Data":"caaee8df49d80fd20d83df86fac12bfcf5d7b9c319eed96ee2aca64b00f48432"} Dec 01 14:49:38 crc kubenswrapper[4637]: I1201 14:49:38.898172 4637 generic.go:334] "Generic (PLEG): container finished" podID="cf829738-1178-4c69-add1-22239dd6b4c9" containerID="ef787a6bbb86388bb4d01512a1107830712ebc4036a4ca5e486a212a40ff0332" exitCode=0 Dec 01 14:49:38 crc kubenswrapper[4637]: I1201 14:49:38.898449 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kmbq" event={"ID":"cf829738-1178-4c69-add1-22239dd6b4c9","Type":"ContainerDied","Data":"ef787a6bbb86388bb4d01512a1107830712ebc4036a4ca5e486a212a40ff0332"} Dec 01 14:49:38 crc kubenswrapper[4637]: I1201 14:49:38.903109 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8wk6j" event={"ID":"85895600-b021-44c3-ac07-f6ccd4f40226","Type":"ContainerStarted","Data":"c1c655a98a831a7af713d70e74499ca2ceb8c2be7baabf923b674c44e6257e7a"} Dec 01 14:49:38 crc kubenswrapper[4637]: I1201 14:49:38.907533 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l2zct" podStartSLOduration=5.162247296 podStartE2EDuration="1m16.907517513s" podCreationTimestamp="2025-12-01 14:48:22 +0000 UTC" firstStartedPulling="2025-12-01 14:48:25.493106476 +0000 UTC m=+156.010815304" lastFinishedPulling="2025-12-01 14:49:37.238376693 +0000 UTC m=+227.756085521" observedRunningTime="2025-12-01 
14:49:38.903649128 +0000 UTC m=+229.421357956" watchObservedRunningTime="2025-12-01 14:49:38.907517513 +0000 UTC m=+229.425226341" Dec 01 14:49:38 crc kubenswrapper[4637]: I1201 14:49:38.909268 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-79qxf" podStartSLOduration=7.049525871 podStartE2EDuration="1m19.909259619s" podCreationTimestamp="2025-12-01 14:48:19 +0000 UTC" firstStartedPulling="2025-12-01 14:48:24.27530753 +0000 UTC m=+154.793016358" lastFinishedPulling="2025-12-01 14:49:37.135041278 +0000 UTC m=+227.652750106" observedRunningTime="2025-12-01 14:49:38.868648575 +0000 UTC m=+229.386357403" watchObservedRunningTime="2025-12-01 14:49:38.909259619 +0000 UTC m=+229.426968447" Dec 01 14:49:38 crc kubenswrapper[4637]: I1201 14:49:38.957488 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6nhdr" podStartSLOduration=4.307410095 podStartE2EDuration="1m15.957456574s" podCreationTimestamp="2025-12-01 14:48:23 +0000 UTC" firstStartedPulling="2025-12-01 14:48:25.510782956 +0000 UTC m=+156.028491784" lastFinishedPulling="2025-12-01 14:49:37.160829435 +0000 UTC m=+227.678538263" observedRunningTime="2025-12-01 14:49:38.952497426 +0000 UTC m=+229.470206254" watchObservedRunningTime="2025-12-01 14:49:38.957456574 +0000 UTC m=+229.475165402" Dec 01 14:49:39 crc kubenswrapper[4637]: I1201 14:49:39.019212 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8wk6j" podStartSLOduration=5.079228326 podStartE2EDuration="1m18.019194045s" podCreationTimestamp="2025-12-01 14:48:21 +0000 UTC" firstStartedPulling="2025-12-01 14:48:24.312106034 +0000 UTC m=+154.829814862" lastFinishedPulling="2025-12-01 14:49:37.252071763 +0000 UTC m=+227.769780581" observedRunningTime="2025-12-01 14:49:39.016500139 +0000 UTC m=+229.534208967" watchObservedRunningTime="2025-12-01 14:49:39.019194045 
+0000 UTC m=+229.536902873" Dec 01 14:49:39 crc kubenswrapper[4637]: I1201 14:49:39.039618 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bfht5" podStartSLOduration=6.870139388 podStartE2EDuration="1m20.039598361s" podCreationTimestamp="2025-12-01 14:48:19 +0000 UTC" firstStartedPulling="2025-12-01 14:48:24.351550809 +0000 UTC m=+154.869259637" lastFinishedPulling="2025-12-01 14:49:37.521009782 +0000 UTC m=+228.038718610" observedRunningTime="2025-12-01 14:49:39.032548444 +0000 UTC m=+229.550257282" watchObservedRunningTime="2025-12-01 14:49:39.039598361 +0000 UTC m=+229.557307189" Dec 01 14:49:39 crc kubenswrapper[4637]: I1201 14:49:39.778901 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7950539-3e35-4e74-8a69-c2b3c3ba928f" path="/var/lib/kubelet/pods/b7950539-3e35-4e74-8a69-c2b3c3ba928f/volumes" Dec 01 14:49:39 crc kubenswrapper[4637]: I1201 14:49:39.920158 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kmbq" event={"ID":"cf829738-1178-4c69-add1-22239dd6b4c9","Type":"ContainerStarted","Data":"3e65323273b53e69a191561d363157fe4069f114f38cbddcf5b62c2b1af760dd"} Dec 01 14:49:39 crc kubenswrapper[4637]: I1201 14:49:39.921817 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" event={"ID":"e288e57c-c4dc-4134-b964-3e6d4469989f","Type":"ContainerStarted","Data":"01d9157ddd7dda98822039efac4abbe0469b362d57cbcad27a302351477eaca3"} Dec 01 14:49:39 crc kubenswrapper[4637]: I1201 14:49:39.922039 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:39 crc kubenswrapper[4637]: I1201 14:49:39.924625 4637 generic.go:334] "Generic (PLEG): container finished" podID="3b76fb53-14a1-49f9-b120-a4b492ab70fc" 
containerID="caaee8df49d80fd20d83df86fac12bfcf5d7b9c319eed96ee2aca64b00f48432" exitCode=0 Dec 01 14:49:39 crc kubenswrapper[4637]: I1201 14:49:39.924674 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2k4sx" event={"ID":"3b76fb53-14a1-49f9-b120-a4b492ab70fc","Type":"ContainerDied","Data":"caaee8df49d80fd20d83df86fac12bfcf5d7b9c319eed96ee2aca64b00f48432"} Dec 01 14:49:39 crc kubenswrapper[4637]: I1201 14:49:39.955193 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4kmbq" podStartSLOduration=3.781256641 podStartE2EDuration="1m17.955176719s" podCreationTimestamp="2025-12-01 14:48:22 +0000 UTC" firstStartedPulling="2025-12-01 14:48:25.467901246 +0000 UTC m=+155.985610074" lastFinishedPulling="2025-12-01 14:49:39.641821324 +0000 UTC m=+230.159530152" observedRunningTime="2025-12-01 14:49:39.951626185 +0000 UTC m=+230.469335013" watchObservedRunningTime="2025-12-01 14:49:39.955176719 +0000 UTC m=+230.472885547" Dec 01 14:49:40 crc kubenswrapper[4637]: I1201 14:49:40.026965 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" podStartSLOduration=30.026944712 podStartE2EDuration="30.026944712s" podCreationTimestamp="2025-12-01 14:49:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:49:40.013629255 +0000 UTC m=+230.531338083" watchObservedRunningTime="2025-12-01 14:49:40.026944712 +0000 UTC m=+230.544653540" Dec 01 14:49:40 crc kubenswrapper[4637]: I1201 14:49:40.348450 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-79qxf" Dec 01 14:49:40 crc kubenswrapper[4637]: I1201 14:49:40.349539 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-79qxf" Dec 01 14:49:40 crc kubenswrapper[4637]: I1201 14:49:40.432367 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bfht5" Dec 01 14:49:40 crc kubenswrapper[4637]: I1201 14:49:40.432691 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bfht5" Dec 01 14:49:40 crc kubenswrapper[4637]: I1201 14:49:40.564993 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9m7pd" Dec 01 14:49:40 crc kubenswrapper[4637]: I1201 14:49:40.565038 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9m7pd" Dec 01 14:49:40 crc kubenswrapper[4637]: I1201 14:49:40.759604 4637 patch_prober.go:28] interesting pod/oauth-openshift-6c5fcdcf5-dnk6w container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 01 14:49:40 crc kubenswrapper[4637]: [+]log ok Dec 01 14:49:40 crc kubenswrapper[4637]: [-]poststarthook/max-in-flight-filter failed: reason withheld Dec 01 14:49:40 crc kubenswrapper[4637]: [-]poststarthook/storage-object-count-tracker-hook failed: reason withheld Dec 01 14:49:40 crc kubenswrapper[4637]: [-]poststarthook/openshift.io-StartUserInformer failed: reason withheld Dec 01 14:49:40 crc kubenswrapper[4637]: healthz check failed Dec 01 14:49:40 crc kubenswrapper[4637]: I1201 14:49:40.759671 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" podUID="e288e57c-c4dc-4134-b964-3e6d4469989f" containerName="oauth-openshift" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 14:49:40 crc kubenswrapper[4637]: I1201 14:49:40.952039 4637 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6c5fcdcf5-dnk6w" Dec 01 14:49:41 crc kubenswrapper[4637]: I1201 14:49:41.600439 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-bfht5" podUID="90a8e718-fe2f-4e8f-acc6-bb25efde0385" containerName="registry-server" probeResult="failure" output=< Dec 01 14:49:41 crc kubenswrapper[4637]: timeout: failed to connect service ":50051" within 1s Dec 01 14:49:41 crc kubenswrapper[4637]: > Dec 01 14:49:41 crc kubenswrapper[4637]: I1201 14:49:41.602468 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-79qxf" podUID="f427d1c1-5d40-4473-8295-4f271899dd13" containerName="registry-server" probeResult="failure" output=< Dec 01 14:49:41 crc kubenswrapper[4637]: timeout: failed to connect service ":50051" within 1s Dec 01 14:49:41 crc kubenswrapper[4637]: > Dec 01 14:49:41 crc kubenswrapper[4637]: I1201 14:49:41.693805 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9m7pd" podUID="fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673" containerName="registry-server" probeResult="failure" output=< Dec 01 14:49:41 crc kubenswrapper[4637]: timeout: failed to connect service ":50051" within 1s Dec 01 14:49:41 crc kubenswrapper[4637]: > Dec 01 14:49:41 crc kubenswrapper[4637]: I1201 14:49:41.945006 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2k4sx" event={"ID":"3b76fb53-14a1-49f9-b120-a4b492ab70fc","Type":"ContainerStarted","Data":"6c92435f77f8ad2059a93fa0f96f72c7f68de242a728c533752e7841cabce2a4"} Dec 01 14:49:41 crc kubenswrapper[4637]: I1201 14:49:41.964521 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2k4sx" podStartSLOduration=6.741279414 podStartE2EDuration="1m22.964504374s" podCreationTimestamp="2025-12-01 14:48:19 +0000 UTC" 
firstStartedPulling="2025-12-01 14:48:24.253245486 +0000 UTC m=+154.770954314" lastFinishedPulling="2025-12-01 14:49:40.476470446 +0000 UTC m=+230.994179274" observedRunningTime="2025-12-01 14:49:41.963953057 +0000 UTC m=+232.481661885" watchObservedRunningTime="2025-12-01 14:49:41.964504374 +0000 UTC m=+232.482213192" Dec 01 14:49:42 crc kubenswrapper[4637]: I1201 14:49:42.217546 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8wk6j" Dec 01 14:49:42 crc kubenswrapper[4637]: I1201 14:49:42.217634 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8wk6j" Dec 01 14:49:42 crc kubenswrapper[4637]: I1201 14:49:42.273064 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8wk6j" Dec 01 14:49:42 crc kubenswrapper[4637]: I1201 14:49:42.845197 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4kmbq" Dec 01 14:49:42 crc kubenswrapper[4637]: I1201 14:49:42.845914 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4kmbq" Dec 01 14:49:42 crc kubenswrapper[4637]: I1201 14:49:42.885832 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4kmbq" Dec 01 14:49:42 crc kubenswrapper[4637]: I1201 14:49:42.988728 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8wk6j" Dec 01 14:49:43 crc kubenswrapper[4637]: I1201 14:49:43.372319 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l2zct" Dec 01 14:49:43 crc kubenswrapper[4637]: I1201 14:49:43.372417 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-l2zct" Dec 01 14:49:43 crc kubenswrapper[4637]: I1201 14:49:43.658055 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6nhdr" Dec 01 14:49:43 crc kubenswrapper[4637]: I1201 14:49:43.658403 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6nhdr" Dec 01 14:49:44 crc kubenswrapper[4637]: I1201 14:49:44.414404 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l2zct" podUID="8023e8dc-e1e9-48d5-b1de-005d6e38e174" containerName="registry-server" probeResult="failure" output=< Dec 01 14:49:44 crc kubenswrapper[4637]: timeout: failed to connect service ":50051" within 1s Dec 01 14:49:44 crc kubenswrapper[4637]: > Dec 01 14:49:44 crc kubenswrapper[4637]: I1201 14:49:44.723289 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6nhdr" podUID="f9bf0fdb-f832-4c97-a1e4-74aace880d56" containerName="registry-server" probeResult="failure" output=< Dec 01 14:49:44 crc kubenswrapper[4637]: timeout: failed to connect service ":50051" within 1s Dec 01 14:49:44 crc kubenswrapper[4637]: > Dec 01 14:49:50 crc kubenswrapper[4637]: I1201 14:49:50.262129 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2k4sx" Dec 01 14:49:50 crc kubenswrapper[4637]: I1201 14:49:50.262421 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2k4sx" Dec 01 14:49:50 crc kubenswrapper[4637]: I1201 14:49:50.307132 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2k4sx" Dec 01 14:49:50 crc kubenswrapper[4637]: I1201 14:49:50.395844 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-79qxf" Dec 01 14:49:50 crc kubenswrapper[4637]: I1201 14:49:50.441552 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-79qxf" Dec 01 14:49:50 crc kubenswrapper[4637]: I1201 14:49:50.478589 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bfht5" Dec 01 14:49:50 crc kubenswrapper[4637]: I1201 14:49:50.521816 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bfht5" Dec 01 14:49:50 crc kubenswrapper[4637]: I1201 14:49:50.609121 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9m7pd" Dec 01 14:49:50 crc kubenswrapper[4637]: I1201 14:49:50.652545 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9m7pd" Dec 01 14:49:51 crc kubenswrapper[4637]: I1201 14:49:51.104547 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2k4sx" Dec 01 14:49:51 crc kubenswrapper[4637]: I1201 14:49:51.819743 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-79qxf"] Dec 01 14:49:52 crc kubenswrapper[4637]: I1201 14:49:52.049360 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-79qxf" podUID="f427d1c1-5d40-4473-8295-4f271899dd13" containerName="registry-server" containerID="cri-o://6a07cc099a093a1a564bee6d7aa362132dd30d7590aa1bccb26a76ca646cef0b" gracePeriod=2 Dec 01 14:49:52 crc kubenswrapper[4637]: I1201 14:49:52.839711 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9m7pd"] Dec 01 14:49:52 crc kubenswrapper[4637]: I1201 14:49:52.840176 4637 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9m7pd" podUID="fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673" containerName="registry-server" containerID="cri-o://551deabf40ed0c7be20bcfe43e079947e228817ce839817cfcd495039c3febe6" gracePeriod=2 Dec 01 14:49:52 crc kubenswrapper[4637]: I1201 14:49:52.923622 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4kmbq" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.018271 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-79qxf" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.037631 4637 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 01 14:49:53 crc kubenswrapper[4637]: E1201 14:49:53.037857 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f427d1c1-5d40-4473-8295-4f271899dd13" containerName="extract-utilities" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.037873 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="f427d1c1-5d40-4473-8295-4f271899dd13" containerName="extract-utilities" Dec 01 14:49:53 crc kubenswrapper[4637]: E1201 14:49:53.037883 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f427d1c1-5d40-4473-8295-4f271899dd13" containerName="registry-server" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.037889 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="f427d1c1-5d40-4473-8295-4f271899dd13" containerName="registry-server" Dec 01 14:49:53 crc kubenswrapper[4637]: E1201 14:49:53.037900 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f427d1c1-5d40-4473-8295-4f271899dd13" containerName="extract-content" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.037906 4637 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f427d1c1-5d40-4473-8295-4f271899dd13" containerName="extract-content" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.038021 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="f427d1c1-5d40-4473-8295-4f271899dd13" containerName="registry-server" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.038364 4637 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.038570 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.038714 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169" gracePeriod=15 Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.038827 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5" gracePeriod=15 Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.038864 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5" gracePeriod=15 Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.038963 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d" gracePeriod=15 Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.038893 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772" gracePeriod=15 Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.043671 4637 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 01 14:49:53 crc kubenswrapper[4637]: E1201 14:49:53.043874 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.043888 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 01 14:49:53 crc kubenswrapper[4637]: E1201 14:49:53.043898 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.043904 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 01 14:49:53 crc kubenswrapper[4637]: E1201 14:49:53.043915 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.043922 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 01 14:49:53 crc 
kubenswrapper[4637]: E1201 14:49:53.043958 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.043966 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 14:49:53 crc kubenswrapper[4637]: E1201 14:49:53.043978 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.043985 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 01 14:49:53 crc kubenswrapper[4637]: E1201 14:49:53.043995 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.044000 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.044104 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.044116 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.044127 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.044136 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.044146 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.044154 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 01 14:49:53 crc kubenswrapper[4637]: E1201 14:49:53.044329 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.044339 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.073607 4637 generic.go:334] "Generic (PLEG): container finished" podID="f427d1c1-5d40-4473-8295-4f271899dd13" containerID="6a07cc099a093a1a564bee6d7aa362132dd30d7590aa1bccb26a76ca646cef0b" exitCode=0 Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.077122 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-79qxf" event={"ID":"f427d1c1-5d40-4473-8295-4f271899dd13","Type":"ContainerDied","Data":"6a07cc099a093a1a564bee6d7aa362132dd30d7590aa1bccb26a76ca646cef0b"} Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.077163 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-79qxf" event={"ID":"f427d1c1-5d40-4473-8295-4f271899dd13","Type":"ContainerDied","Data":"c19abc36efb4b2fe06e5b5feba2adf4d06bc47a7a1075b45e344c24e5c41447d"} Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.077183 4637 scope.go:117] "RemoveContainer" containerID="6a07cc099a093a1a564bee6d7aa362132dd30d7590aa1bccb26a76ca646cef0b" Dec 01 14:49:53 
crc kubenswrapper[4637]: I1201 14:49:53.077348 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-79qxf" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.087704 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.102745 4637 generic.go:334] "Generic (PLEG): container finished" podID="fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673" containerID="551deabf40ed0c7be20bcfe43e079947e228817ce839817cfcd495039c3febe6" exitCode=0 Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.102783 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9m7pd" event={"ID":"fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673","Type":"ContainerDied","Data":"551deabf40ed0c7be20bcfe43e079947e228817ce839817cfcd495039c3febe6"} Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.133885 4637 scope.go:117] "RemoveContainer" containerID="5ae42c91c254a61ed0e67f0313e373df83be8b3016e0f4e22467441a7a32e928" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.168090 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f427d1c1-5d40-4473-8295-4f271899dd13-utilities\") pod \"f427d1c1-5d40-4473-8295-4f271899dd13\" (UID: \"f427d1c1-5d40-4473-8295-4f271899dd13\") " Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.168235 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f427d1c1-5d40-4473-8295-4f271899dd13-catalog-content\") pod \"f427d1c1-5d40-4473-8295-4f271899dd13\" (UID: \"f427d1c1-5d40-4473-8295-4f271899dd13\") " Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.168278 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgkx2\" 
(UniqueName: \"kubernetes.io/projected/f427d1c1-5d40-4473-8295-4f271899dd13-kube-api-access-fgkx2\") pod \"f427d1c1-5d40-4473-8295-4f271899dd13\" (UID: \"f427d1c1-5d40-4473-8295-4f271899dd13\") " Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.168557 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.168590 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.168631 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.168662 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.168684 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.168710 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.168729 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.168746 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.177043 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f427d1c1-5d40-4473-8295-4f271899dd13-utilities" (OuterVolumeSpecName: "utilities") pod "f427d1c1-5d40-4473-8295-4f271899dd13" (UID: "f427d1c1-5d40-4473-8295-4f271899dd13"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.180648 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f427d1c1-5d40-4473-8295-4f271899dd13-kube-api-access-fgkx2" (OuterVolumeSpecName: "kube-api-access-fgkx2") pod "f427d1c1-5d40-4473-8295-4f271899dd13" (UID: "f427d1c1-5d40-4473-8295-4f271899dd13"). InnerVolumeSpecName "kube-api-access-fgkx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.223193 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f427d1c1-5d40-4473-8295-4f271899dd13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f427d1c1-5d40-4473-8295-4f271899dd13" (UID: "f427d1c1-5d40-4473-8295-4f271899dd13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.245886 4637 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.246222 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.253310 4637 scope.go:117] "RemoveContainer" containerID="3adb48ca4e0b31e653a73ea4f0e13da56eeceeabd7ff4c3af11d6db3c9e5dcf4" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.271345 4637 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.271422 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.271447 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.271478 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.271502 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.271507 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.271532 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.271544 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.271556 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.271587 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.271606 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.271629 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.271668 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.271694 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.271703 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f427d1c1-5d40-4473-8295-4f271899dd13-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.271702 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.271721 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f427d1c1-5d40-4473-8295-4f271899dd13-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.271737 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgkx2\" (UniqueName: \"kubernetes.io/projected/f427d1c1-5d40-4473-8295-4f271899dd13-kube-api-access-fgkx2\") on node \"crc\" DevicePath \"\"" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.271743 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.313535 4637 scope.go:117] "RemoveContainer" containerID="6a07cc099a093a1a564bee6d7aa362132dd30d7590aa1bccb26a76ca646cef0b" Dec 01 14:49:53 crc kubenswrapper[4637]: E1201 14:49:53.315599 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a07cc099a093a1a564bee6d7aa362132dd30d7590aa1bccb26a76ca646cef0b\": container with ID starting with 6a07cc099a093a1a564bee6d7aa362132dd30d7590aa1bccb26a76ca646cef0b not found: ID does not exist" containerID="6a07cc099a093a1a564bee6d7aa362132dd30d7590aa1bccb26a76ca646cef0b" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.315625 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a07cc099a093a1a564bee6d7aa362132dd30d7590aa1bccb26a76ca646cef0b"} err="failed to get container status \"6a07cc099a093a1a564bee6d7aa362132dd30d7590aa1bccb26a76ca646cef0b\": rpc error: 
code = NotFound desc = could not find container \"6a07cc099a093a1a564bee6d7aa362132dd30d7590aa1bccb26a76ca646cef0b\": container with ID starting with 6a07cc099a093a1a564bee6d7aa362132dd30d7590aa1bccb26a76ca646cef0b not found: ID does not exist" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.315647 4637 scope.go:117] "RemoveContainer" containerID="5ae42c91c254a61ed0e67f0313e373df83be8b3016e0f4e22467441a7a32e928" Dec 01 14:49:53 crc kubenswrapper[4637]: E1201 14:49:53.316562 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ae42c91c254a61ed0e67f0313e373df83be8b3016e0f4e22467441a7a32e928\": container with ID starting with 5ae42c91c254a61ed0e67f0313e373df83be8b3016e0f4e22467441a7a32e928 not found: ID does not exist" containerID="5ae42c91c254a61ed0e67f0313e373df83be8b3016e0f4e22467441a7a32e928" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.316623 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ae42c91c254a61ed0e67f0313e373df83be8b3016e0f4e22467441a7a32e928"} err="failed to get container status \"5ae42c91c254a61ed0e67f0313e373df83be8b3016e0f4e22467441a7a32e928\": rpc error: code = NotFound desc = could not find container \"5ae42c91c254a61ed0e67f0313e373df83be8b3016e0f4e22467441a7a32e928\": container with ID starting with 5ae42c91c254a61ed0e67f0313e373df83be8b3016e0f4e22467441a7a32e928 not found: ID does not exist" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.316657 4637 scope.go:117] "RemoveContainer" containerID="3adb48ca4e0b31e653a73ea4f0e13da56eeceeabd7ff4c3af11d6db3c9e5dcf4" Dec 01 14:49:53 crc kubenswrapper[4637]: E1201 14:49:53.317264 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3adb48ca4e0b31e653a73ea4f0e13da56eeceeabd7ff4c3af11d6db3c9e5dcf4\": container with ID starting with 
3adb48ca4e0b31e653a73ea4f0e13da56eeceeabd7ff4c3af11d6db3c9e5dcf4 not found: ID does not exist" containerID="3adb48ca4e0b31e653a73ea4f0e13da56eeceeabd7ff4c3af11d6db3c9e5dcf4" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.317295 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3adb48ca4e0b31e653a73ea4f0e13da56eeceeabd7ff4c3af11d6db3c9e5dcf4"} err="failed to get container status \"3adb48ca4e0b31e653a73ea4f0e13da56eeceeabd7ff4c3af11d6db3c9e5dcf4\": rpc error: code = NotFound desc = could not find container \"3adb48ca4e0b31e653a73ea4f0e13da56eeceeabd7ff4c3af11d6db3c9e5dcf4\": container with ID starting with 3adb48ca4e0b31e653a73ea4f0e13da56eeceeabd7ff4c3af11d6db3c9e5dcf4 not found: ID does not exist" Dec 01 14:49:53 crc kubenswrapper[4637]: E1201 14:49:53.329728 4637 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:49:53 crc kubenswrapper[4637]: E1201 14:49:53.330833 4637 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:49:53 crc kubenswrapper[4637]: E1201 14:49:53.331091 4637 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:49:53 crc kubenswrapper[4637]: E1201 14:49:53.331304 4637 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:49:53 crc kubenswrapper[4637]: E1201 
14:49:53.331550 4637 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.331574 4637 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 01 14:49:53 crc kubenswrapper[4637]: E1201 14:49:53.331725 4637 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="200ms" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.363244 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9m7pd" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.379381 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 14:49:53 crc kubenswrapper[4637]: E1201 14:49:53.404287 4637 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.204:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187d1ee3f445ffa9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-01 14:49:53.403740073 +0000 UTC m=+243.921448901,LastTimestamp:2025-12-01 14:49:53.403740073 +0000 UTC m=+243.921448901,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.445758 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l2zct" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.477222 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673-utilities\") pod \"fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673\" (UID: \"fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673\") " Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.477528 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-cghlr\" (UniqueName: \"kubernetes.io/projected/fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673-kube-api-access-cghlr\") pod \"fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673\" (UID: \"fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673\") " Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.477585 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673-catalog-content\") pod \"fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673\" (UID: \"fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673\") " Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.480495 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673-utilities" (OuterVolumeSpecName: "utilities") pod "fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673" (UID: "fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.483485 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l2zct" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.502387 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673-kube-api-access-cghlr" (OuterVolumeSpecName: "kube-api-access-cghlr") pod "fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673" (UID: "fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673"). InnerVolumeSpecName "kube-api-access-cghlr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.529004 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673" (UID: "fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:49:53 crc kubenswrapper[4637]: E1201 14:49:53.533167 4637 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="400ms" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.579019 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.579267 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cghlr\" (UniqueName: \"kubernetes.io/projected/fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673-kube-api-access-cghlr\") on node \"crc\" DevicePath \"\"" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.579382 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.696275 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6nhdr" Dec 01 14:49:53 crc kubenswrapper[4637]: I1201 14:49:53.735745 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-6nhdr" Dec 01 14:49:53 crc kubenswrapper[4637]: E1201 14:49:53.934737 4637 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="800ms" Dec 01 14:49:54 crc kubenswrapper[4637]: I1201 14:49:54.108972 4637 generic.go:334] "Generic (PLEG): container finished" podID="73b64284-bacb-455e-9e41-7c7f7deeacfc" containerID="3eb93426d47e9c9b6dc3db18c89b0131a7635fb926510b58dbf07de10c98b562" exitCode=0 Dec 01 14:49:54 crc kubenswrapper[4637]: I1201 14:49:54.109042 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"73b64284-bacb-455e-9e41-7c7f7deeacfc","Type":"ContainerDied","Data":"3eb93426d47e9c9b6dc3db18c89b0131a7635fb926510b58dbf07de10c98b562"} Dec 01 14:49:54 crc kubenswrapper[4637]: I1201 14:49:54.111660 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 01 14:49:54 crc kubenswrapper[4637]: I1201 14:49:54.113191 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 01 14:49:54 crc kubenswrapper[4637]: I1201 14:49:54.115063 4637 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772" exitCode=0 Dec 01 14:49:54 crc kubenswrapper[4637]: I1201 14:49:54.115094 4637 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d" exitCode=0 Dec 01 14:49:54 crc kubenswrapper[4637]: I1201 14:49:54.115106 
4637 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5" exitCode=0 Dec 01 14:49:54 crc kubenswrapper[4637]: I1201 14:49:54.115115 4637 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5" exitCode=2 Dec 01 14:49:54 crc kubenswrapper[4637]: I1201 14:49:54.115169 4637 scope.go:117] "RemoveContainer" containerID="b07ed07ea44212fab6cbf3373530ffd93eaa4798c047c7b163d367dcec98d52c" Dec 01 14:49:54 crc kubenswrapper[4637]: I1201 14:49:54.121294 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"48b43853e02d2c55375d0ef9f1f5b2af47d47ceb8e9afbe5c8e7e0facdf7ba5d"} Dec 01 14:49:54 crc kubenswrapper[4637]: I1201 14:49:54.121333 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c2bbb865d1b6b138b5c46f3d338d1964965eb4a1da6645452af63fd06c114043"} Dec 01 14:49:54 crc kubenswrapper[4637]: I1201 14:49:54.124298 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9m7pd" event={"ID":"fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673","Type":"ContainerDied","Data":"89e81b40c856ccde9b73c27be3fcaa8e38dd0a9685027a02cf45a0583e8e4dd8"} Dec 01 14:49:54 crc kubenswrapper[4637]: I1201 14:49:54.124510 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9m7pd" Dec 01 14:49:54 crc kubenswrapper[4637]: I1201 14:49:54.157656 4637 scope.go:117] "RemoveContainer" containerID="551deabf40ed0c7be20bcfe43e079947e228817ce839817cfcd495039c3febe6" Dec 01 14:49:54 crc kubenswrapper[4637]: I1201 14:49:54.173368 4637 scope.go:117] "RemoveContainer" containerID="5722a373cbd3feaa4e37234c9ccb02e9636d013cbed1f83dd64bc5ba2052c2c8" Dec 01 14:49:54 crc kubenswrapper[4637]: I1201 14:49:54.216778 4637 scope.go:117] "RemoveContainer" containerID="a789c7eaa84d9bd95ca86e125ad2c86691874c03a84df4a8c0b3e58ce6996cda" Dec 01 14:49:54 crc kubenswrapper[4637]: E1201 14:49:54.736637 4637 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="1.6s" Dec 01 14:49:55 crc kubenswrapper[4637]: I1201 14:49:55.166889 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 01 14:49:55 crc kubenswrapper[4637]: I1201 14:49:55.572297 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 01 14:49:55 crc kubenswrapper[4637]: I1201 14:49:55.743075 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73b64284-bacb-455e-9e41-7c7f7deeacfc-kubelet-dir\") pod \"73b64284-bacb-455e-9e41-7c7f7deeacfc\" (UID: \"73b64284-bacb-455e-9e41-7c7f7deeacfc\") " Dec 01 14:49:55 crc kubenswrapper[4637]: I1201 14:49:55.743265 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73b64284-bacb-455e-9e41-7c7f7deeacfc-kube-api-access\") pod \"73b64284-bacb-455e-9e41-7c7f7deeacfc\" (UID: \"73b64284-bacb-455e-9e41-7c7f7deeacfc\") " Dec 01 14:49:55 crc kubenswrapper[4637]: I1201 14:49:55.743456 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/73b64284-bacb-455e-9e41-7c7f7deeacfc-var-lock\") pod \"73b64284-bacb-455e-9e41-7c7f7deeacfc\" (UID: \"73b64284-bacb-455e-9e41-7c7f7deeacfc\") " Dec 01 14:49:55 crc kubenswrapper[4637]: I1201 14:49:55.743635 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73b64284-bacb-455e-9e41-7c7f7deeacfc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "73b64284-bacb-455e-9e41-7c7f7deeacfc" (UID: "73b64284-bacb-455e-9e41-7c7f7deeacfc"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:49:55 crc kubenswrapper[4637]: I1201 14:49:55.743989 4637 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73b64284-bacb-455e-9e41-7c7f7deeacfc-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 01 14:49:55 crc kubenswrapper[4637]: I1201 14:49:55.744092 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73b64284-bacb-455e-9e41-7c7f7deeacfc-var-lock" (OuterVolumeSpecName: "var-lock") pod "73b64284-bacb-455e-9e41-7c7f7deeacfc" (UID: "73b64284-bacb-455e-9e41-7c7f7deeacfc"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:49:55 crc kubenswrapper[4637]: I1201 14:49:55.769277 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73b64284-bacb-455e-9e41-7c7f7deeacfc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "73b64284-bacb-455e-9e41-7c7f7deeacfc" (UID: "73b64284-bacb-455e-9e41-7c7f7deeacfc"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:49:55 crc kubenswrapper[4637]: I1201 14:49:55.845746 4637 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/73b64284-bacb-455e-9e41-7c7f7deeacfc-var-lock\") on node \"crc\" DevicePath \"\"" Dec 01 14:49:55 crc kubenswrapper[4637]: I1201 14:49:55.845816 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73b64284-bacb-455e-9e41-7c7f7deeacfc-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.024013 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.024832 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.150548 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.150653 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.150682 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.150686 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.150722 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.150845 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.151009 4637 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.151023 4637 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.151031 4637 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.177905 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"73b64284-bacb-455e-9e41-7c7f7deeacfc","Type":"ContainerDied","Data":"495ffa786ae47344261c20f4327d9a99cb824fb442131313c453ffe8929e2bf7"} Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.177942 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.177966 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="495ffa786ae47344261c20f4327d9a99cb824fb442131313c453ffe8929e2bf7" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.181773 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.182662 4637 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169" exitCode=0 Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.182715 4637 scope.go:117] "RemoveContainer" containerID="203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.183317 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.201212 4637 scope.go:117] "RemoveContainer" containerID="5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.216318 4637 scope.go:117] "RemoveContainer" containerID="aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.230288 4637 scope.go:117] "RemoveContainer" containerID="cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.243887 4637 scope.go:117] "RemoveContainer" containerID="e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.260788 4637 scope.go:117] "RemoveContainer" containerID="5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.280193 4637 scope.go:117] "RemoveContainer" containerID="203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772" Dec 01 14:49:56 crc kubenswrapper[4637]: E1201 14:49:56.280924 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\": container with ID starting with 203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772 not found: ID does not exist" containerID="203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.282866 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772"} err="failed to get container status \"203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\": rpc error: code = NotFound desc = could not find 
container \"203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772\": container with ID starting with 203d535ae0cb12abc00d25db4f9477084317c93d2b5fef0f61c47e914054d772 not found: ID does not exist" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.282897 4637 scope.go:117] "RemoveContainer" containerID="5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d" Dec 01 14:49:56 crc kubenswrapper[4637]: E1201 14:49:56.283418 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\": container with ID starting with 5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d not found: ID does not exist" containerID="5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.283468 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d"} err="failed to get container status \"5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\": rpc error: code = NotFound desc = could not find container \"5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d\": container with ID starting with 5a56591af98165da8d5fb4ecd1679d7dfffb47bb51f66bbc58d232f828e8539d not found: ID does not exist" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.283506 4637 scope.go:117] "RemoveContainer" containerID="aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5" Dec 01 14:49:56 crc kubenswrapper[4637]: E1201 14:49:56.283912 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\": container with ID starting with aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5 not found: ID does 
not exist" containerID="aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.283961 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5"} err="failed to get container status \"aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\": rpc error: code = NotFound desc = could not find container \"aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5\": container with ID starting with aea56ec859f366c92943d3204ac1e68e77c5a91e2afaa6c97e20b15a68f6c7b5 not found: ID does not exist" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.283982 4637 scope.go:117] "RemoveContainer" containerID="cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5" Dec 01 14:49:56 crc kubenswrapper[4637]: E1201 14:49:56.284280 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\": container with ID starting with cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5 not found: ID does not exist" containerID="cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.284315 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5"} err="failed to get container status \"cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\": rpc error: code = NotFound desc = could not find container \"cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5\": container with ID starting with cf39a1d0d08e28d2d6618dbcdd3c8b89c9200a44417ea1497155a288571067f5 not found: ID does not exist" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.284337 4637 
scope.go:117] "RemoveContainer" containerID="e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169" Dec 01 14:49:56 crc kubenswrapper[4637]: E1201 14:49:56.284734 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\": container with ID starting with e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169 not found: ID does not exist" containerID="e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.284828 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169"} err="failed to get container status \"e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\": rpc error: code = NotFound desc = could not find container \"e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169\": container with ID starting with e66f6a9a4efddae5967feb6c0744b57bae9a1d73f4c023471ff78cf3fb144169 not found: ID does not exist" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.284906 4637 scope.go:117] "RemoveContainer" containerID="5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5" Dec 01 14:49:56 crc kubenswrapper[4637]: E1201 14:49:56.285249 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\": container with ID starting with 5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5 not found: ID does not exist" containerID="5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5" Dec 01 14:49:56 crc kubenswrapper[4637]: I1201 14:49:56.285273 4637 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5"} err="failed to get container status \"5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\": rpc error: code = NotFound desc = could not find container \"5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5\": container with ID starting with 5865478ee586cf4a6c60ea5e55f9ce6e613190b936a96444880dd5eec76082a5 not found: ID does not exist" Dec 01 14:49:56 crc kubenswrapper[4637]: E1201 14:49:56.337471 4637 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="3.2s" Dec 01 14:49:56 crc kubenswrapper[4637]: E1201 14:49:56.704054 4637 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.204:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187d1ee3f445ffa9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-01 14:49:53.403740073 +0000 UTC m=+243.921448901,LastTimestamp:2025-12-01 14:49:53.403740073 +0000 UTC m=+243.921448901,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 01 14:49:57 crc kubenswrapper[4637]: I1201 14:49:57.787079 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 01 14:49:58 crc kubenswrapper[4637]: I1201 14:49:58.085088 4637 status_manager.go:851] "Failed to get status for pod" podUID="f427d1c1-5d40-4473-8295-4f271899dd13" pod="openshift-marketplace/certified-operators-79qxf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-79qxf\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:49:58 crc kubenswrapper[4637]: I1201 14:49:58.085277 4637 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:49:58 crc kubenswrapper[4637]: I1201 14:49:58.085493 4637 status_manager.go:851] "Failed to get status for pod" podUID="8023e8dc-e1e9-48d5-b1de-005d6e38e174" pod="openshift-marketplace/redhat-operators-l2zct" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-l2zct\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:49:58 crc kubenswrapper[4637]: I1201 14:49:58.085638 4637 status_manager.go:851] "Failed to get status for pod" podUID="f427d1c1-5d40-4473-8295-4f271899dd13" pod="openshift-marketplace/certified-operators-79qxf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-79qxf\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:49:58 crc kubenswrapper[4637]: I1201 14:49:58.085774 4637 status_manager.go:851] 
"Failed to get status for pod" podUID="fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673" pod="openshift-marketplace/community-operators-9m7pd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9m7pd\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:49:58 crc kubenswrapper[4637]: I1201 14:49:58.085981 4637 status_manager.go:851] "Failed to get status for pod" podUID="f9bf0fdb-f832-4c97-a1e4-74aace880d56" pod="openshift-marketplace/redhat-operators-6nhdr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6nhdr\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:49:58 crc kubenswrapper[4637]: I1201 14:49:58.086159 4637 status_manager.go:851] "Failed to get status for pod" podUID="73b64284-bacb-455e-9e41-7c7f7deeacfc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:49:58 crc kubenswrapper[4637]: I1201 14:49:58.086313 4637 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:49:59 crc kubenswrapper[4637]: E1201 14:49:59.538178 4637 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="6.4s" Dec 01 14:49:59 crc kubenswrapper[4637]: I1201 14:49:59.774526 4637 status_manager.go:851] "Failed to get status for pod" podUID="f427d1c1-5d40-4473-8295-4f271899dd13" 
pod="openshift-marketplace/certified-operators-79qxf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-79qxf\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:49:59 crc kubenswrapper[4637]: I1201 14:49:59.774809 4637 status_manager.go:851] "Failed to get status for pod" podUID="8023e8dc-e1e9-48d5-b1de-005d6e38e174" pod="openshift-marketplace/redhat-operators-l2zct" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-l2zct\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:49:59 crc kubenswrapper[4637]: I1201 14:49:59.775022 4637 status_manager.go:851] "Failed to get status for pod" podUID="fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673" pod="openshift-marketplace/community-operators-9m7pd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9m7pd\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:49:59 crc kubenswrapper[4637]: I1201 14:49:59.775226 4637 status_manager.go:851] "Failed to get status for pod" podUID="f9bf0fdb-f832-4c97-a1e4-74aace880d56" pod="openshift-marketplace/redhat-operators-6nhdr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6nhdr\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:49:59 crc kubenswrapper[4637]: I1201 14:49:59.775571 4637 status_manager.go:851] "Failed to get status for pod" podUID="73b64284-bacb-455e-9e41-7c7f7deeacfc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:49:59 crc kubenswrapper[4637]: I1201 14:49:59.776154 4637 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:50:04 crc kubenswrapper[4637]: I1201 14:50:04.770370 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:50:04 crc kubenswrapper[4637]: I1201 14:50:04.772955 4637 status_manager.go:851] "Failed to get status for pod" podUID="f427d1c1-5d40-4473-8295-4f271899dd13" pod="openshift-marketplace/certified-operators-79qxf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-79qxf\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:50:04 crc kubenswrapper[4637]: I1201 14:50:04.773532 4637 status_manager.go:851] "Failed to get status for pod" podUID="8023e8dc-e1e9-48d5-b1de-005d6e38e174" pod="openshift-marketplace/redhat-operators-l2zct" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-l2zct\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:50:04 crc kubenswrapper[4637]: I1201 14:50:04.774451 4637 status_manager.go:851] "Failed to get status for pod" podUID="fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673" pod="openshift-marketplace/community-operators-9m7pd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9m7pd\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:50:04 crc kubenswrapper[4637]: I1201 14:50:04.775071 4637 status_manager.go:851] "Failed to get status for pod" podUID="f9bf0fdb-f832-4c97-a1e4-74aace880d56" pod="openshift-marketplace/redhat-operators-6nhdr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6nhdr\": dial tcp 38.102.83.204:6443: 
connect: connection refused" Dec 01 14:50:04 crc kubenswrapper[4637]: I1201 14:50:04.775412 4637 status_manager.go:851] "Failed to get status for pod" podUID="73b64284-bacb-455e-9e41-7c7f7deeacfc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:50:04 crc kubenswrapper[4637]: I1201 14:50:04.775782 4637 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:50:04 crc kubenswrapper[4637]: I1201 14:50:04.785684 4637 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e564b19-2536-41be-874f-622840ea7ea1" Dec 01 14:50:04 crc kubenswrapper[4637]: I1201 14:50:04.785809 4637 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e564b19-2536-41be-874f-622840ea7ea1" Dec 01 14:50:04 crc kubenswrapper[4637]: E1201 14:50:04.786277 4637 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:50:04 crc kubenswrapper[4637]: I1201 14:50:04.786716 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:50:05 crc kubenswrapper[4637]: I1201 14:50:05.246786 4637 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="c088150c252f85e6a4e1cc3cca3f8db77fd1516bd905ae315c6cd509ec47b67c" exitCode=0 Dec 01 14:50:05 crc kubenswrapper[4637]: I1201 14:50:05.246831 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"c088150c252f85e6a4e1cc3cca3f8db77fd1516bd905ae315c6cd509ec47b67c"} Dec 01 14:50:05 crc kubenswrapper[4637]: I1201 14:50:05.246859 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2dad513e45fc022b3909d0b6a8240fbe3b70018a8f0ead3a8909f76e2af8957c"} Dec 01 14:50:05 crc kubenswrapper[4637]: I1201 14:50:05.247141 4637 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e564b19-2536-41be-874f-622840ea7ea1" Dec 01 14:50:05 crc kubenswrapper[4637]: I1201 14:50:05.247153 4637 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e564b19-2536-41be-874f-622840ea7ea1" Dec 01 14:50:05 crc kubenswrapper[4637]: E1201 14:50:05.247401 4637 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:50:05 crc kubenswrapper[4637]: I1201 14:50:05.247740 4637 status_manager.go:851] "Failed to get status for pod" podUID="f427d1c1-5d40-4473-8295-4f271899dd13" pod="openshift-marketplace/certified-operators-79qxf" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-79qxf\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:50:05 crc kubenswrapper[4637]: I1201 14:50:05.248352 4637 status_manager.go:851] "Failed to get status for pod" podUID="8023e8dc-e1e9-48d5-b1de-005d6e38e174" pod="openshift-marketplace/redhat-operators-l2zct" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-l2zct\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:50:05 crc kubenswrapper[4637]: I1201 14:50:05.248559 4637 status_manager.go:851] "Failed to get status for pod" podUID="fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673" pod="openshift-marketplace/community-operators-9m7pd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9m7pd\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:50:05 crc kubenswrapper[4637]: I1201 14:50:05.248746 4637 status_manager.go:851] "Failed to get status for pod" podUID="f9bf0fdb-f832-4c97-a1e4-74aace880d56" pod="openshift-marketplace/redhat-operators-6nhdr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6nhdr\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:50:05 crc kubenswrapper[4637]: I1201 14:50:05.249001 4637 status_manager.go:851] "Failed to get status for pod" podUID="73b64284-bacb-455e-9e41-7c7f7deeacfc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:50:05 crc kubenswrapper[4637]: I1201 14:50:05.249173 4637 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Dec 01 14:50:06 crc kubenswrapper[4637]: I1201 14:50:06.284103 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5b7febe5b866ab741d9ee19bf6fbe355caf3cc30461af3b470d3ab45d6bd3b58"} Dec 01 14:50:06 crc kubenswrapper[4637]: I1201 14:50:06.284158 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"894de83bab28f10754d6093f72eccc0129ce0499834c302e9eb2ce7e7c4b9c43"} Dec 01 14:50:06 crc kubenswrapper[4637]: I1201 14:50:06.284173 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8fb9d36300f731d55a2087837a320b51610412a27174ad4ab01f48f7c39ff2d7"} Dec 01 14:50:06 crc kubenswrapper[4637]: I1201 14:50:06.284184 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e095478e4f20767947d9faae5573fc6a83b80ef2547d345ffda332d6b4c66aaa"} Dec 01 14:50:07 crc kubenswrapper[4637]: I1201 14:50:07.296337 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"784df986f091d61e59b6c384d18187dc49693c4bf69cdf5b11cce90f85e88731"} Dec 01 14:50:07 crc kubenswrapper[4637]: I1201 14:50:07.297670 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:50:07 crc kubenswrapper[4637]: I1201 
14:50:07.296783 4637 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e564b19-2536-41be-874f-622840ea7ea1" Dec 01 14:50:07 crc kubenswrapper[4637]: I1201 14:50:07.297855 4637 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e564b19-2536-41be-874f-622840ea7ea1" Dec 01 14:50:08 crc kubenswrapper[4637]: I1201 14:50:08.306369 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 01 14:50:08 crc kubenswrapper[4637]: I1201 14:50:08.306412 4637 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245" exitCode=1 Dec 01 14:50:08 crc kubenswrapper[4637]: I1201 14:50:08.306449 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245"} Dec 01 14:50:08 crc kubenswrapper[4637]: I1201 14:50:08.306903 4637 scope.go:117] "RemoveContainer" containerID="06730a2d820db02b42bfae161449ebd42709545e97e59b40e797641c45ab3245" Dec 01 14:50:09 crc kubenswrapper[4637]: I1201 14:50:09.317450 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 01 14:50:09 crc kubenswrapper[4637]: I1201 14:50:09.317526 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b7ec080efb23c9cac2d9e40c07867f2b7b44336c368ffc8972442dc956f225e3"} Dec 01 14:50:09 crc 
kubenswrapper[4637]: I1201 14:50:09.788026 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:50:09 crc kubenswrapper[4637]: I1201 14:50:09.788139 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:50:09 crc kubenswrapper[4637]: I1201 14:50:09.793905 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:50:12 crc kubenswrapper[4637]: I1201 14:50:12.309591 4637 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:50:12 crc kubenswrapper[4637]: I1201 14:50:12.332875 4637 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e564b19-2536-41be-874f-622840ea7ea1" Dec 01 14:50:12 crc kubenswrapper[4637]: I1201 14:50:12.332902 4637 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e564b19-2536-41be-874f-622840ea7ea1" Dec 01 14:50:12 crc kubenswrapper[4637]: I1201 14:50:12.338018 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 14:50:12 crc kubenswrapper[4637]: I1201 14:50:12.432189 4637 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e164f0db-0765-42c1-b5e0-29506bf76d06" Dec 01 14:50:13 crc kubenswrapper[4637]: I1201 14:50:13.337536 4637 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7e564b19-2536-41be-874f-622840ea7ea1" Dec 01 14:50:13 crc kubenswrapper[4637]: I1201 14:50:13.337846 4637 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="7e564b19-2536-41be-874f-622840ea7ea1" Dec 01 14:50:13 crc kubenswrapper[4637]: I1201 14:50:13.345203 4637 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e164f0db-0765-42c1-b5e0-29506bf76d06" Dec 01 14:50:13 crc kubenswrapper[4637]: I1201 14:50:13.937652 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 14:50:16 crc kubenswrapper[4637]: I1201 14:50:16.764698 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 14:50:16 crc kubenswrapper[4637]: I1201 14:50:16.770355 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 14:50:19 crc kubenswrapper[4637]: I1201 14:50:19.295725 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 01 14:50:22 crc kubenswrapper[4637]: I1201 14:50:22.046949 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 01 14:50:22 crc kubenswrapper[4637]: I1201 14:50:22.353215 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 01 14:50:23 crc kubenswrapper[4637]: I1201 14:50:23.515849 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 01 14:50:23 crc kubenswrapper[4637]: I1201 14:50:23.516245 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 01 14:50:23 crc kubenswrapper[4637]: I1201 14:50:23.532602 4637 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 01 14:50:23 crc kubenswrapper[4637]: I1201 14:50:23.569770 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 01 14:50:23 crc kubenswrapper[4637]: I1201 14:50:23.620417 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 01 14:50:23 crc kubenswrapper[4637]: I1201 14:50:23.724006 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 01 14:50:23 crc kubenswrapper[4637]: I1201 14:50:23.779063 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 01 14:50:23 crc kubenswrapper[4637]: I1201 14:50:23.942259 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 14:50:24 crc kubenswrapper[4637]: I1201 14:50:24.176684 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 01 14:50:24 crc kubenswrapper[4637]: I1201 14:50:24.296980 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 01 14:50:24 crc kubenswrapper[4637]: I1201 14:50:24.435591 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 01 14:50:24 crc kubenswrapper[4637]: I1201 14:50:24.597869 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 01 14:50:24 crc kubenswrapper[4637]: I1201 14:50:24.699540 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 01 14:50:24 crc kubenswrapper[4637]: I1201 
14:50:24.984945 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 01 14:50:25 crc kubenswrapper[4637]: I1201 14:50:25.286318 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 01 14:50:25 crc kubenswrapper[4637]: I1201 14:50:25.294537 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 01 14:50:25 crc kubenswrapper[4637]: I1201 14:50:25.431950 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 01 14:50:25 crc kubenswrapper[4637]: I1201 14:50:25.588463 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 01 14:50:25 crc kubenswrapper[4637]: I1201 14:50:25.618864 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 01 14:50:25 crc kubenswrapper[4637]: I1201 14:50:25.810973 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 01 14:50:26 crc kubenswrapper[4637]: I1201 14:50:26.037035 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 01 14:50:26 crc kubenswrapper[4637]: I1201 14:50:26.188739 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 01 14:50:26 crc kubenswrapper[4637]: I1201 14:50:26.386498 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 01 14:50:26 crc kubenswrapper[4637]: I1201 14:50:26.626008 4637 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 01 14:50:26 crc kubenswrapper[4637]: I1201 14:50:26.766061 4637 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 01 14:50:26 crc kubenswrapper[4637]: I1201 14:50:26.783474 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 01 14:50:26 crc kubenswrapper[4637]: I1201 14:50:26.824811 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 01 14:50:26 crc kubenswrapper[4637]: I1201 14:50:26.900501 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 01 14:50:27 crc kubenswrapper[4637]: I1201 14:50:27.056035 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 01 14:50:27 crc kubenswrapper[4637]: I1201 14:50:27.171172 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 01 14:50:27 crc kubenswrapper[4637]: I1201 14:50:27.171716 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 01 14:50:27 crc kubenswrapper[4637]: I1201 14:50:27.226518 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 01 14:50:27 crc kubenswrapper[4637]: I1201 14:50:27.330453 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 01 14:50:27 crc kubenswrapper[4637]: I1201 14:50:27.434906 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 01 14:50:27 crc kubenswrapper[4637]: I1201 14:50:27.530733 4637 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"proxy-tls" Dec 01 14:50:27 crc kubenswrapper[4637]: I1201 14:50:27.723455 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 01 14:50:27 crc kubenswrapper[4637]: I1201 14:50:27.790429 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 01 14:50:27 crc kubenswrapper[4637]: I1201 14:50:27.885983 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 01 14:50:27 crc kubenswrapper[4637]: I1201 14:50:27.911763 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 01 14:50:27 crc kubenswrapper[4637]: I1201 14:50:27.959560 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 01 14:50:27 crc kubenswrapper[4637]: I1201 14:50:27.989274 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 01 14:50:28 crc kubenswrapper[4637]: I1201 14:50:28.051461 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 01 14:50:28 crc kubenswrapper[4637]: I1201 14:50:28.069486 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 01 14:50:28 crc kubenswrapper[4637]: I1201 14:50:28.075337 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 01 14:50:28 crc kubenswrapper[4637]: I1201 14:50:28.166284 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 01 14:50:28 crc kubenswrapper[4637]: 
I1201 14:50:28.179278 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 01 14:50:28 crc kubenswrapper[4637]: I1201 14:50:28.199780 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 01 14:50:28 crc kubenswrapper[4637]: I1201 14:50:28.209701 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 01 14:50:28 crc kubenswrapper[4637]: I1201 14:50:28.270475 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 01 14:50:28 crc kubenswrapper[4637]: I1201 14:50:28.271862 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 01 14:50:28 crc kubenswrapper[4637]: I1201 14:50:28.436467 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 01 14:50:28 crc kubenswrapper[4637]: I1201 14:50:28.454776 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 01 14:50:28 crc kubenswrapper[4637]: I1201 14:50:28.578800 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 01 14:50:28 crc kubenswrapper[4637]: I1201 14:50:28.634619 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 01 14:50:28 crc kubenswrapper[4637]: I1201 14:50:28.667957 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 01 14:50:28 crc kubenswrapper[4637]: I1201 
14:50:28.701444 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 01 14:50:28 crc kubenswrapper[4637]: I1201 14:50:28.703151 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 01 14:50:28 crc kubenswrapper[4637]: I1201 14:50:28.726352 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 01 14:50:28 crc kubenswrapper[4637]: I1201 14:50:28.785301 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 01 14:50:28 crc kubenswrapper[4637]: I1201 14:50:28.858257 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 01 14:50:28 crc kubenswrapper[4637]: I1201 14:50:28.859295 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 01 14:50:28 crc kubenswrapper[4637]: I1201 14:50:28.864442 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 01 14:50:28 crc kubenswrapper[4637]: I1201 14:50:28.927704 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 01 14:50:28 crc kubenswrapper[4637]: I1201 14:50:28.933268 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 01 14:50:29 crc kubenswrapper[4637]: I1201 14:50:29.043586 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 01 14:50:29 crc kubenswrapper[4637]: I1201 14:50:29.053145 4637 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 01 14:50:29 crc kubenswrapper[4637]: I1201 14:50:29.189580 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 01 14:50:29 crc kubenswrapper[4637]: I1201 14:50:29.231321 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 01 14:50:29 crc kubenswrapper[4637]: I1201 14:50:29.245688 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 01 14:50:29 crc kubenswrapper[4637]: I1201 14:50:29.254624 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 01 14:50:29 crc kubenswrapper[4637]: I1201 14:50:29.344164 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 01 14:50:29 crc kubenswrapper[4637]: I1201 14:50:29.365271 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 01 14:50:29 crc kubenswrapper[4637]: I1201 14:50:29.447222 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 01 14:50:29 crc kubenswrapper[4637]: I1201 14:50:29.452672 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 01 14:50:29 crc kubenswrapper[4637]: I1201 14:50:29.460078 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 01 14:50:29 crc kubenswrapper[4637]: I1201 14:50:29.481252 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" 
Dec 01 14:50:29 crc kubenswrapper[4637]: I1201 14:50:29.513085 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 01 14:50:29 crc kubenswrapper[4637]: I1201 14:50:29.615545 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 01 14:50:29 crc kubenswrapper[4637]: I1201 14:50:29.635732 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 01 14:50:29 crc kubenswrapper[4637]: I1201 14:50:29.657008 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 01 14:50:29 crc kubenswrapper[4637]: I1201 14:50:29.676152 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 01 14:50:29 crc kubenswrapper[4637]: I1201 14:50:29.744697 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 01 14:50:29 crc kubenswrapper[4637]: I1201 14:50:29.789361 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 01 14:50:29 crc kubenswrapper[4637]: I1201 14:50:29.900874 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 01 14:50:29 crc kubenswrapper[4637]: I1201 14:50:29.914192 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 01 14:50:29 crc kubenswrapper[4637]: I1201 14:50:29.985731 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 01 14:50:30 crc kubenswrapper[4637]: I1201 14:50:30.117200 4637 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 01 14:50:30 crc kubenswrapper[4637]: I1201 14:50:30.119959 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 01 14:50:30 crc kubenswrapper[4637]: I1201 14:50:30.145218 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 01 14:50:30 crc kubenswrapper[4637]: I1201 14:50:30.178171 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 01 14:50:30 crc kubenswrapper[4637]: I1201 14:50:30.506428 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 01 14:50:30 crc kubenswrapper[4637]: I1201 14:50:30.634349 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 01 14:50:30 crc kubenswrapper[4637]: I1201 14:50:30.688452 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 01 14:50:30 crc kubenswrapper[4637]: I1201 14:50:30.795977 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 01 14:50:30 crc kubenswrapper[4637]: I1201 14:50:30.865145 4637 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 01 14:50:30 crc kubenswrapper[4637]: I1201 14:50:30.966676 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 01 14:50:30 crc kubenswrapper[4637]: I1201 14:50:30.988027 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 01 14:50:31 crc kubenswrapper[4637]: I1201 14:50:31.041990 4637 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 01 14:50:31 crc kubenswrapper[4637]: I1201 14:50:31.042094 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 01 14:50:31 crc kubenswrapper[4637]: I1201 14:50:31.200755 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 01 14:50:31 crc kubenswrapper[4637]: I1201 14:50:31.207499 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 01 14:50:31 crc kubenswrapper[4637]: I1201 14:50:31.209473 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 01 14:50:31 crc kubenswrapper[4637]: I1201 14:50:31.437379 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 01 14:50:31 crc kubenswrapper[4637]: I1201 14:50:31.445150 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 01 14:50:31 crc kubenswrapper[4637]: I1201 14:50:31.456215 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 01 14:50:31 crc kubenswrapper[4637]: I1201 14:50:31.526085 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 01 14:50:31 crc kubenswrapper[4637]: I1201 14:50:31.560645 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 01 14:50:31 crc kubenswrapper[4637]: I1201 14:50:31.630380 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 01 
14:50:31 crc kubenswrapper[4637]: I1201 14:50:31.652326 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 01 14:50:31 crc kubenswrapper[4637]: I1201 14:50:31.721136 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 01 14:50:31 crc kubenswrapper[4637]: I1201 14:50:31.735859 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 01 14:50:31 crc kubenswrapper[4637]: I1201 14:50:31.753262 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 01 14:50:31 crc kubenswrapper[4637]: I1201 14:50:31.808496 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 01 14:50:31 crc kubenswrapper[4637]: I1201 14:50:31.835833 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 01 14:50:31 crc kubenswrapper[4637]: I1201 14:50:31.942591 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 01 14:50:32 crc kubenswrapper[4637]: I1201 14:50:32.037919 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 01 14:50:32 crc kubenswrapper[4637]: I1201 14:50:32.187953 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 01 14:50:32 crc kubenswrapper[4637]: I1201 14:50:32.197757 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 01 14:50:32 crc kubenswrapper[4637]: I1201 14:50:32.210890 4637 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 01 14:50:32 crc kubenswrapper[4637]: I1201 14:50:32.318998 4637 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 01 14:50:32 crc kubenswrapper[4637]: I1201 14:50:32.336977 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 01 14:50:32 crc kubenswrapper[4637]: I1201 14:50:32.389679 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 01 14:50:32 crc kubenswrapper[4637]: I1201 14:50:32.443979 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 01 14:50:32 crc kubenswrapper[4637]: I1201 14:50:32.469626 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 01 14:50:32 crc kubenswrapper[4637]: I1201 14:50:32.495856 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 01 14:50:32 crc kubenswrapper[4637]: I1201 14:50:32.527165 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 01 14:50:32 crc kubenswrapper[4637]: I1201 14:50:32.529250 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 01 14:50:32 crc kubenswrapper[4637]: I1201 14:50:32.553722 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 01 14:50:32 crc kubenswrapper[4637]: I1201 14:50:32.617778 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 01 14:50:32 crc kubenswrapper[4637]: I1201 14:50:32.695654 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 
01 14:50:32 crc kubenswrapper[4637]: I1201 14:50:32.767653 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 01 14:50:32 crc kubenswrapper[4637]: I1201 14:50:32.781961 4637 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 01 14:50:32 crc kubenswrapper[4637]: I1201 14:50:32.857145 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 01 14:50:32 crc kubenswrapper[4637]: I1201 14:50:32.896473 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 01 14:50:33 crc kubenswrapper[4637]: I1201 14:50:33.001678 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 01 14:50:33 crc kubenswrapper[4637]: I1201 14:50:33.261268 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 01 14:50:33 crc kubenswrapper[4637]: I1201 14:50:33.285832 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 01 14:50:33 crc kubenswrapper[4637]: I1201 14:50:33.342257 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 01 14:50:33 crc kubenswrapper[4637]: I1201 14:50:33.382739 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 01 14:50:33 crc kubenswrapper[4637]: I1201 14:50:33.458339 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 01 14:50:33 crc kubenswrapper[4637]: I1201 14:50:33.498570 4637 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 01 14:50:33 crc kubenswrapper[4637]: I1201 14:50:33.502520 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 01 14:50:33 crc kubenswrapper[4637]: I1201 14:50:33.534058 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 01 14:50:33 crc kubenswrapper[4637]: I1201 14:50:33.623556 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 01 14:50:33 crc kubenswrapper[4637]: I1201 14:50:33.660530 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 01 14:50:33 crc kubenswrapper[4637]: I1201 14:50:33.798959 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 01 14:50:33 crc kubenswrapper[4637]: I1201 14:50:33.812917 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 01 14:50:33 crc kubenswrapper[4637]: I1201 14:50:33.816505 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 01 14:50:33 crc kubenswrapper[4637]: I1201 14:50:33.817388 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 01 14:50:33 crc kubenswrapper[4637]: I1201 14:50:33.818573 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 01 14:50:33 crc kubenswrapper[4637]: I1201 14:50:33.843053 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 01 14:50:33 crc kubenswrapper[4637]: I1201 
14:50:33.899406 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 01 14:50:33 crc kubenswrapper[4637]: I1201 14:50:33.906868 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 01 14:50:33 crc kubenswrapper[4637]: I1201 14:50:33.964746 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 01 14:50:33 crc kubenswrapper[4637]: I1201 14:50:33.969267 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 01 14:50:33 crc kubenswrapper[4637]: I1201 14:50:33.969880 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 01 14:50:34 crc kubenswrapper[4637]: I1201 14:50:34.049564 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 01 14:50:34 crc kubenswrapper[4637]: I1201 14:50:34.128331 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 01 14:50:34 crc kubenswrapper[4637]: I1201 14:50:34.137861 4637 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 01 14:50:34 crc kubenswrapper[4637]: I1201 14:50:34.144903 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 01 14:50:34 crc kubenswrapper[4637]: I1201 14:50:34.160186 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 01 14:50:34 crc kubenswrapper[4637]: I1201 14:50:34.356860 4637 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 01 14:50:34 crc kubenswrapper[4637]: I1201 14:50:34.418106 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 01 14:50:34 crc kubenswrapper[4637]: I1201 14:50:34.439117 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 01 14:50:34 crc kubenswrapper[4637]: I1201 14:50:34.525119 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 01 14:50:34 crc kubenswrapper[4637]: I1201 14:50:34.528832 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 01 14:50:34 crc kubenswrapper[4637]: I1201 14:50:34.603157 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 01 14:50:34 crc kubenswrapper[4637]: I1201 14:50:34.622475 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 01 14:50:34 crc kubenswrapper[4637]: I1201 14:50:34.667661 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 01 14:50:34 crc kubenswrapper[4637]: I1201 14:50:34.669080 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 01 14:50:34 crc kubenswrapper[4637]: I1201 14:50:34.703187 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 01 14:50:34 crc kubenswrapper[4637]: I1201 14:50:34.806663 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 01 
14:50:34 crc kubenswrapper[4637]: I1201 14:50:34.815397 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 01 14:50:34 crc kubenswrapper[4637]: I1201 14:50:34.840481 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 01 14:50:34 crc kubenswrapper[4637]: I1201 14:50:34.854700 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 01 14:50:34 crc kubenswrapper[4637]: I1201 14:50:34.928882 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 01 14:50:34 crc kubenswrapper[4637]: I1201 14:50:34.960355 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 01 14:50:34 crc kubenswrapper[4637]: I1201 14:50:34.961306 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 01 14:50:35 crc kubenswrapper[4637]: I1201 14:50:35.013822 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 01 14:50:35 crc kubenswrapper[4637]: I1201 14:50:35.024237 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 01 14:50:35 crc kubenswrapper[4637]: I1201 14:50:35.061250 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 01 14:50:35 crc kubenswrapper[4637]: I1201 14:50:35.069479 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 01 14:50:35 crc kubenswrapper[4637]: I1201 14:50:35.135867 4637 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 01 14:50:35 crc kubenswrapper[4637]: I1201 14:50:35.211417 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 01 14:50:35 crc kubenswrapper[4637]: I1201 14:50:35.298660 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 01 14:50:35 crc kubenswrapper[4637]: I1201 14:50:35.585008 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 01 14:50:35 crc kubenswrapper[4637]: I1201 14:50:35.604756 4637 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 01 14:50:35 crc kubenswrapper[4637]: I1201 14:50:35.606651 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=42.606633464 podStartE2EDuration="42.606633464s" podCreationTimestamp="2025-12-01 14:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:50:12.327421389 +0000 UTC m=+262.845130217" watchObservedRunningTime="2025-12-01 14:50:35.606633464 +0000 UTC m=+286.124342292" Dec 01 14:50:35 crc kubenswrapper[4637]: I1201 14:50:35.609358 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/certified-operators-79qxf","openshift-marketplace/community-operators-9m7pd"] Dec 01 14:50:35 crc kubenswrapper[4637]: I1201 14:50:35.609411 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 01 14:50:35 crc kubenswrapper[4637]: I1201 14:50:35.614172 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 
01 14:50:35 crc kubenswrapper[4637]: I1201 14:50:35.639062 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.638811557 podStartE2EDuration="23.638811557s" podCreationTimestamp="2025-12-01 14:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:50:35.63561826 +0000 UTC m=+286.153327088" watchObservedRunningTime="2025-12-01 14:50:35.638811557 +0000 UTC m=+286.156520405" Dec 01 14:50:35 crc kubenswrapper[4637]: I1201 14:50:35.728744 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 01 14:50:35 crc kubenswrapper[4637]: I1201 14:50:35.762760 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 01 14:50:35 crc kubenswrapper[4637]: I1201 14:50:35.779347 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f427d1c1-5d40-4473-8295-4f271899dd13" path="/var/lib/kubelet/pods/f427d1c1-5d40-4473-8295-4f271899dd13/volumes" Dec 01 14:50:35 crc kubenswrapper[4637]: I1201 14:50:35.780157 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673" path="/var/lib/kubelet/pods/fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673/volumes" Dec 01 14:50:35 crc kubenswrapper[4637]: I1201 14:50:35.805714 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 01 14:50:35 crc kubenswrapper[4637]: I1201 14:50:35.810074 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 01 14:50:35 crc kubenswrapper[4637]: I1201 14:50:35.812020 4637 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 01 14:50:35 crc kubenswrapper[4637]: I1201 14:50:35.891653 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 01 14:50:35 crc kubenswrapper[4637]: I1201 14:50:35.930684 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 01 14:50:35 crc kubenswrapper[4637]: I1201 14:50:35.931291 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 01 14:50:35 crc kubenswrapper[4637]: I1201 14:50:35.976729 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 01 14:50:35 crc kubenswrapper[4637]: I1201 14:50:35.996995 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 01 14:50:36 crc kubenswrapper[4637]: I1201 14:50:36.116517 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 01 14:50:36 crc kubenswrapper[4637]: I1201 14:50:36.171774 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 01 14:50:36 crc kubenswrapper[4637]: I1201 14:50:36.235322 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 01 14:50:36 crc kubenswrapper[4637]: I1201 14:50:36.313119 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 01 14:50:36 crc kubenswrapper[4637]: I1201 14:50:36.425909 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 01 14:50:36 crc kubenswrapper[4637]: I1201 
14:50:36.513749 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 01 14:50:36 crc kubenswrapper[4637]: I1201 14:50:36.540587 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 01 14:50:36 crc kubenswrapper[4637]: I1201 14:50:36.545313 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 01 14:50:36 crc kubenswrapper[4637]: I1201 14:50:36.564323 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 01 14:50:36 crc kubenswrapper[4637]: I1201 14:50:36.670017 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 01 14:50:36 crc kubenswrapper[4637]: I1201 14:50:36.703375 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 01 14:50:36 crc kubenswrapper[4637]: I1201 14:50:36.716305 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 01 14:50:36 crc kubenswrapper[4637]: I1201 14:50:36.745740 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 01 14:50:36 crc kubenswrapper[4637]: I1201 14:50:36.770278 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 01 14:50:36 crc kubenswrapper[4637]: I1201 14:50:36.826743 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 01 14:50:36 crc kubenswrapper[4637]: I1201 14:50:36.865828 4637 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 01 14:50:36 crc kubenswrapper[4637]: I1201 14:50:36.867436 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 01 14:50:36 crc kubenswrapper[4637]: I1201 14:50:36.934208 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 01 14:50:36 crc kubenswrapper[4637]: I1201 14:50:36.954963 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.000014 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.027617 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.072900 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.089872 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.139800 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.184146 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.342130 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 01 14:50:37 crc 
kubenswrapper[4637]: I1201 14:50:37.466880 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.561466 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.593380 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2k4sx"] Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.593591 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2k4sx" podUID="3b76fb53-14a1-49f9-b120-a4b492ab70fc" containerName="registry-server" containerID="cri-o://6c92435f77f8ad2059a93fa0f96f72c7f68de242a728c533752e7841cabce2a4" gracePeriod=30 Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.608482 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bfht5"] Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.608785 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bfht5" podUID="90a8e718-fe2f-4e8f-acc6-bb25efde0385" containerName="registry-server" containerID="cri-o://1f21d9c09ffa1d58944d6cbcc0f5b6702c22f8cb91f68f5dffe1babbd07826c2" gracePeriod=30 Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.631757 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-244ll"] Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.632071 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-244ll" podUID="6e99277b-aa2d-4f8d-a2f9-aeb954080a27" containerName="marketplace-operator" containerID="cri-o://df466ebd6e91862047f9fc1afefa0f41c383aa719d1527b6f8d874f1961d0a2b" gracePeriod=30 Dec 01 14:50:37 
crc kubenswrapper[4637]: I1201 14:50:37.648310 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kmbq"] Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.648625 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4kmbq" podUID="cf829738-1178-4c69-add1-22239dd6b4c9" containerName="registry-server" containerID="cri-o://3e65323273b53e69a191561d363157fe4069f114f38cbddcf5b62c2b1af760dd" gracePeriod=30 Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.666013 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.666418 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8wk6j"] Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.666744 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8wk6j" podUID="85895600-b021-44c3-ac07-f6ccd4f40226" containerName="registry-server" containerID="cri-o://c1c655a98a831a7af713d70e74499ca2ceb8c2be7baabf923b674c44e6257e7a" gracePeriod=30 Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.679443 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6nhdr"] Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.679713 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6nhdr" podUID="f9bf0fdb-f832-4c97-a1e4-74aace880d56" containerName="registry-server" containerID="cri-o://fb30f9c5047f9af0e12f923b556d095973f45c9bb228f6ee9640246c532cc441" gracePeriod=30 Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.694605 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l2zct"] Dec 01 14:50:37 crc 
kubenswrapper[4637]: I1201 14:50:37.694853 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l2zct" podUID="8023e8dc-e1e9-48d5-b1de-005d6e38e174" containerName="registry-server" containerID="cri-o://bab648da99f54d6442044d2d536974e817a278d439cf465e9761f8a4eb524140" gracePeriod=30 Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.713998 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8kwzk"] Dec 01 14:50:37 crc kubenswrapper[4637]: E1201 14:50:37.714251 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b64284-bacb-455e-9e41-7c7f7deeacfc" containerName="installer" Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.714262 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b64284-bacb-455e-9e41-7c7f7deeacfc" containerName="installer" Dec 01 14:50:37 crc kubenswrapper[4637]: E1201 14:50:37.714277 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673" containerName="extract-utilities" Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.714286 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673" containerName="extract-utilities" Dec 01 14:50:37 crc kubenswrapper[4637]: E1201 14:50:37.714297 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673" containerName="extract-content" Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.714306 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673" containerName="extract-content" Dec 01 14:50:37 crc kubenswrapper[4637]: E1201 14:50:37.714317 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673" containerName="registry-server" Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.714325 4637 
state_mem.go:107] "Deleted CPUSet assignment" podUID="fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673" containerName="registry-server" Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.714444 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="73b64284-bacb-455e-9e41-7c7f7deeacfc" containerName="installer" Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.714456 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa7bf6cd-51f3-4b0f-8fcf-e1e4fa487673" containerName="registry-server" Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.714890 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8kwzk" Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.752490 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8kwzk"] Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.810311 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c222b01-860c-4973-9a37-7abcbfdf910f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8kwzk\" (UID: \"1c222b01-860c-4973-9a37-7abcbfdf910f\") " pod="openshift-marketplace/marketplace-operator-79b997595-8kwzk" Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.810403 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnds9\" (UniqueName: \"kubernetes.io/projected/1c222b01-860c-4973-9a37-7abcbfdf910f-kube-api-access-dnds9\") pod \"marketplace-operator-79b997595-8kwzk\" (UID: \"1c222b01-860c-4973-9a37-7abcbfdf910f\") " pod="openshift-marketplace/marketplace-operator-79b997595-8kwzk" Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.810489 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1c222b01-860c-4973-9a37-7abcbfdf910f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8kwzk\" (UID: \"1c222b01-860c-4973-9a37-7abcbfdf910f\") " pod="openshift-marketplace/marketplace-operator-79b997595-8kwzk" Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.911635 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c222b01-860c-4973-9a37-7abcbfdf910f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8kwzk\" (UID: \"1c222b01-860c-4973-9a37-7abcbfdf910f\") " pod="openshift-marketplace/marketplace-operator-79b997595-8kwzk" Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.911715 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnds9\" (UniqueName: \"kubernetes.io/projected/1c222b01-860c-4973-9a37-7abcbfdf910f-kube-api-access-dnds9\") pod \"marketplace-operator-79b997595-8kwzk\" (UID: \"1c222b01-860c-4973-9a37-7abcbfdf910f\") " pod="openshift-marketplace/marketplace-operator-79b997595-8kwzk" Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.911824 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1c222b01-860c-4973-9a37-7abcbfdf910f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8kwzk\" (UID: \"1c222b01-860c-4973-9a37-7abcbfdf910f\") " pod="openshift-marketplace/marketplace-operator-79b997595-8kwzk" Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.916381 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c222b01-860c-4973-9a37-7abcbfdf910f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8kwzk\" (UID: \"1c222b01-860c-4973-9a37-7abcbfdf910f\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-8kwzk" Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.934967 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnds9\" (UniqueName: \"kubernetes.io/projected/1c222b01-860c-4973-9a37-7abcbfdf910f-kube-api-access-dnds9\") pod \"marketplace-operator-79b997595-8kwzk\" (UID: \"1c222b01-860c-4973-9a37-7abcbfdf910f\") " pod="openshift-marketplace/marketplace-operator-79b997595-8kwzk" Dec 01 14:50:37 crc kubenswrapper[4637]: I1201 14:50:37.935009 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1c222b01-860c-4973-9a37-7abcbfdf910f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8kwzk\" (UID: \"1c222b01-860c-4973-9a37-7abcbfdf910f\") " pod="openshift-marketplace/marketplace-operator-79b997595-8kwzk" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.153982 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.185920 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8kwzk" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.201700 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kmbq" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.241754 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2k4sx" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.245048 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bfht5" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.253017 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.258464 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-244ll" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.282812 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8wk6j" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.294460 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.321190 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90a8e718-fe2f-4e8f-acc6-bb25efde0385-utilities\") pod \"90a8e718-fe2f-4e8f-acc6-bb25efde0385\" (UID: \"90a8e718-fe2f-4e8f-acc6-bb25efde0385\") " Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.321616 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf829738-1178-4c69-add1-22239dd6b4c9-utilities\") pod \"cf829738-1178-4c69-add1-22239dd6b4c9\" (UID: \"cf829738-1178-4c69-add1-22239dd6b4c9\") " Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.321647 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90a8e718-fe2f-4e8f-acc6-bb25efde0385-catalog-content\") pod \"90a8e718-fe2f-4e8f-acc6-bb25efde0385\" (UID: \"90a8e718-fe2f-4e8f-acc6-bb25efde0385\") " Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.321685 4637 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws2gn\" (UniqueName: \"kubernetes.io/projected/90a8e718-fe2f-4e8f-acc6-bb25efde0385-kube-api-access-ws2gn\") pod \"90a8e718-fe2f-4e8f-acc6-bb25efde0385\" (UID: \"90a8e718-fe2f-4e8f-acc6-bb25efde0385\") " Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.321732 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85895600-b021-44c3-ac07-f6ccd4f40226-catalog-content\") pod \"85895600-b021-44c3-ac07-f6ccd4f40226\" (UID: \"85895600-b021-44c3-ac07-f6ccd4f40226\") " Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.321753 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b76fb53-14a1-49f9-b120-a4b492ab70fc-utilities\") pod \"3b76fb53-14a1-49f9-b120-a4b492ab70fc\" (UID: \"3b76fb53-14a1-49f9-b120-a4b492ab70fc\") " Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.321775 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d56jg\" (UniqueName: \"kubernetes.io/projected/6e99277b-aa2d-4f8d-a2f9-aeb954080a27-kube-api-access-d56jg\") pod \"6e99277b-aa2d-4f8d-a2f9-aeb954080a27\" (UID: \"6e99277b-aa2d-4f8d-a2f9-aeb954080a27\") " Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.321827 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e99277b-aa2d-4f8d-a2f9-aeb954080a27-marketplace-trusted-ca\") pod \"6e99277b-aa2d-4f8d-a2f9-aeb954080a27\" (UID: \"6e99277b-aa2d-4f8d-a2f9-aeb954080a27\") " Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.321851 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxdm7\" (UniqueName: 
\"kubernetes.io/projected/3b76fb53-14a1-49f9-b120-a4b492ab70fc-kube-api-access-gxdm7\") pod \"3b76fb53-14a1-49f9-b120-a4b492ab70fc\" (UID: \"3b76fb53-14a1-49f9-b120-a4b492ab70fc\") " Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.321887 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85895600-b021-44c3-ac07-f6ccd4f40226-utilities\") pod \"85895600-b021-44c3-ac07-f6ccd4f40226\" (UID: \"85895600-b021-44c3-ac07-f6ccd4f40226\") " Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.321909 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b76fb53-14a1-49f9-b120-a4b492ab70fc-catalog-content\") pod \"3b76fb53-14a1-49f9-b120-a4b492ab70fc\" (UID: \"3b76fb53-14a1-49f9-b120-a4b492ab70fc\") " Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.322005 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tfkn\" (UniqueName: \"kubernetes.io/projected/cf829738-1178-4c69-add1-22239dd6b4c9-kube-api-access-8tfkn\") pod \"cf829738-1178-4c69-add1-22239dd6b4c9\" (UID: \"cf829738-1178-4c69-add1-22239dd6b4c9\") " Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.322026 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf829738-1178-4c69-add1-22239dd6b4c9-catalog-content\") pod \"cf829738-1178-4c69-add1-22239dd6b4c9\" (UID: \"cf829738-1178-4c69-add1-22239dd6b4c9\") " Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.322179 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8vg6\" (UniqueName: \"kubernetes.io/projected/85895600-b021-44c3-ac07-f6ccd4f40226-kube-api-access-z8vg6\") pod \"85895600-b021-44c3-ac07-f6ccd4f40226\" (UID: \"85895600-b021-44c3-ac07-f6ccd4f40226\") " Dec 01 14:50:38 crc 
kubenswrapper[4637]: I1201 14:50:38.322252 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6e99277b-aa2d-4f8d-a2f9-aeb954080a27-marketplace-operator-metrics\") pod \"6e99277b-aa2d-4f8d-a2f9-aeb954080a27\" (UID: \"6e99277b-aa2d-4f8d-a2f9-aeb954080a27\") " Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.338214 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf829738-1178-4c69-add1-22239dd6b4c9-utilities" (OuterVolumeSpecName: "utilities") pod "cf829738-1178-4c69-add1-22239dd6b4c9" (UID: "cf829738-1178-4c69-add1-22239dd6b4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.338916 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90a8e718-fe2f-4e8f-acc6-bb25efde0385-utilities" (OuterVolumeSpecName: "utilities") pod "90a8e718-fe2f-4e8f-acc6-bb25efde0385" (UID: "90a8e718-fe2f-4e8f-acc6-bb25efde0385"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.339157 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e99277b-aa2d-4f8d-a2f9-aeb954080a27-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "6e99277b-aa2d-4f8d-a2f9-aeb954080a27" (UID: "6e99277b-aa2d-4f8d-a2f9-aeb954080a27"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.341828 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85895600-b021-44c3-ac07-f6ccd4f40226-utilities" (OuterVolumeSpecName: "utilities") pod "85895600-b021-44c3-ac07-f6ccd4f40226" (UID: "85895600-b021-44c3-ac07-f6ccd4f40226"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.350346 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf829738-1178-4c69-add1-22239dd6b4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf829738-1178-4c69-add1-22239dd6b4c9" (UID: "cf829738-1178-4c69-add1-22239dd6b4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.361968 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b76fb53-14a1-49f9-b120-a4b492ab70fc-utilities" (OuterVolumeSpecName: "utilities") pod "3b76fb53-14a1-49f9-b120-a4b492ab70fc" (UID: "3b76fb53-14a1-49f9-b120-a4b492ab70fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.378822 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e99277b-aa2d-4f8d-a2f9-aeb954080a27-kube-api-access-d56jg" (OuterVolumeSpecName: "kube-api-access-d56jg") pod "6e99277b-aa2d-4f8d-a2f9-aeb954080a27" (UID: "6e99277b-aa2d-4f8d-a2f9-aeb954080a27"). InnerVolumeSpecName "kube-api-access-d56jg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.392656 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.399626 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e99277b-aa2d-4f8d-a2f9-aeb954080a27-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "6e99277b-aa2d-4f8d-a2f9-aeb954080a27" (UID: "6e99277b-aa2d-4f8d-a2f9-aeb954080a27"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.399955 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85895600-b021-44c3-ac07-f6ccd4f40226-kube-api-access-z8vg6" (OuterVolumeSpecName: "kube-api-access-z8vg6") pod "85895600-b021-44c3-ac07-f6ccd4f40226" (UID: "85895600-b021-44c3-ac07-f6ccd4f40226"). InnerVolumeSpecName "kube-api-access-z8vg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.402575 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b76fb53-14a1-49f9-b120-a4b492ab70fc-kube-api-access-gxdm7" (OuterVolumeSpecName: "kube-api-access-gxdm7") pod "3b76fb53-14a1-49f9-b120-a4b492ab70fc" (UID: "3b76fb53-14a1-49f9-b120-a4b492ab70fc"). InnerVolumeSpecName "kube-api-access-gxdm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.407082 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90a8e718-fe2f-4e8f-acc6-bb25efde0385-kube-api-access-ws2gn" (OuterVolumeSpecName: "kube-api-access-ws2gn") pod "90a8e718-fe2f-4e8f-acc6-bb25efde0385" (UID: "90a8e718-fe2f-4e8f-acc6-bb25efde0385"). InnerVolumeSpecName "kube-api-access-ws2gn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.410372 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.413071 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85895600-b021-44c3-ac07-f6ccd4f40226-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "85895600-b021-44c3-ac07-f6ccd4f40226" (UID: "85895600-b021-44c3-ac07-f6ccd4f40226"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.424289 4637 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e99277b-aa2d-4f8d-a2f9-aeb954080a27-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.424319 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxdm7\" (UniqueName: \"kubernetes.io/projected/3b76fb53-14a1-49f9-b120-a4b492ab70fc-kube-api-access-gxdm7\") on node \"crc\" DevicePath \"\"" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.424329 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85895600-b021-44c3-ac07-f6ccd4f40226-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.424340 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf829738-1178-4c69-add1-22239dd6b4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.424349 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8vg6\" (UniqueName: \"kubernetes.io/projected/85895600-b021-44c3-ac07-f6ccd4f40226-kube-api-access-z8vg6\") on node \"crc\" DevicePath \"\"" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.424357 4637 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6e99277b-aa2d-4f8d-a2f9-aeb954080a27-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.424366 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf829738-1178-4c69-add1-22239dd6b4c9-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 14:50:38 
crc kubenswrapper[4637]: I1201 14:50:38.424374 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90a8e718-fe2f-4e8f-acc6-bb25efde0385-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.424382 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws2gn\" (UniqueName: \"kubernetes.io/projected/90a8e718-fe2f-4e8f-acc6-bb25efde0385-kube-api-access-ws2gn\") on node \"crc\" DevicePath \"\"" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.424390 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85895600-b021-44c3-ac07-f6ccd4f40226-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.424399 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b76fb53-14a1-49f9-b120-a4b492ab70fc-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.424406 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d56jg\" (UniqueName: \"kubernetes.io/projected/6e99277b-aa2d-4f8d-a2f9-aeb954080a27-kube-api-access-d56jg\") on node \"crc\" DevicePath \"\"" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.426888 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf829738-1178-4c69-add1-22239dd6b4c9-kube-api-access-8tfkn" (OuterVolumeSpecName: "kube-api-access-8tfkn") pod "cf829738-1178-4c69-add1-22239dd6b4c9" (UID: "cf829738-1178-4c69-add1-22239dd6b4c9"). InnerVolumeSpecName "kube-api-access-8tfkn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.457810 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b76fb53-14a1-49f9-b120-a4b492ab70fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b76fb53-14a1-49f9-b120-a4b492ab70fc" (UID: "3b76fb53-14a1-49f9-b120-a4b492ab70fc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.460692 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90a8e718-fe2f-4e8f-acc6-bb25efde0385-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90a8e718-fe2f-4e8f-acc6-bb25efde0385" (UID: "90a8e718-fe2f-4e8f-acc6-bb25efde0385"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.496082 4637 generic.go:334] "Generic (PLEG): container finished" podID="90a8e718-fe2f-4e8f-acc6-bb25efde0385" containerID="1f21d9c09ffa1d58944d6cbcc0f5b6702c22f8cb91f68f5dffe1babbd07826c2" exitCode=0 Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.496153 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfht5" event={"ID":"90a8e718-fe2f-4e8f-acc6-bb25efde0385","Type":"ContainerDied","Data":"1f21d9c09ffa1d58944d6cbcc0f5b6702c22f8cb91f68f5dffe1babbd07826c2"} Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.496189 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfht5" event={"ID":"90a8e718-fe2f-4e8f-acc6-bb25efde0385","Type":"ContainerDied","Data":"0b65539f72dc08a21c6e8f068a0e7166d1e65664aedf96dc7ea245dcd4ba7b28"} Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.496212 4637 scope.go:117] "RemoveContainer" 
containerID="1f21d9c09ffa1d58944d6cbcc0f5b6702c22f8cb91f68f5dffe1babbd07826c2" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.496348 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bfht5" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.502316 4637 generic.go:334] "Generic (PLEG): container finished" podID="3b76fb53-14a1-49f9-b120-a4b492ab70fc" containerID="6c92435f77f8ad2059a93fa0f96f72c7f68de242a728c533752e7841cabce2a4" exitCode=0 Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.502390 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2k4sx" event={"ID":"3b76fb53-14a1-49f9-b120-a4b492ab70fc","Type":"ContainerDied","Data":"6c92435f77f8ad2059a93fa0f96f72c7f68de242a728c533752e7841cabce2a4"} Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.502420 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2k4sx" event={"ID":"3b76fb53-14a1-49f9-b120-a4b492ab70fc","Type":"ContainerDied","Data":"3569cbc6675d10ace0780e4d5e38e0fd02331ec95674addf40c36e39709f91d3"} Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.502512 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2k4sx" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.517105 4637 generic.go:334] "Generic (PLEG): container finished" podID="6e99277b-aa2d-4f8d-a2f9-aeb954080a27" containerID="df466ebd6e91862047f9fc1afefa0f41c383aa719d1527b6f8d874f1961d0a2b" exitCode=0 Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.517221 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-244ll" event={"ID":"6e99277b-aa2d-4f8d-a2f9-aeb954080a27","Type":"ContainerDied","Data":"df466ebd6e91862047f9fc1afefa0f41c383aa719d1527b6f8d874f1961d0a2b"} Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.517263 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-244ll" event={"ID":"6e99277b-aa2d-4f8d-a2f9-aeb954080a27","Type":"ContainerDied","Data":"0ae5758cfe97973212295788a53d3abda0d20f0d871df2eeed7cf1293d2fc0e5"} Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.517363 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-244ll" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.525604 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b76fb53-14a1-49f9-b120-a4b492ab70fc-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.525629 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tfkn\" (UniqueName: \"kubernetes.io/projected/cf829738-1178-4c69-add1-22239dd6b4c9-kube-api-access-8tfkn\") on node \"crc\" DevicePath \"\"" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.525641 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90a8e718-fe2f-4e8f-acc6-bb25efde0385-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.526738 4637 generic.go:334] "Generic (PLEG): container finished" podID="cf829738-1178-4c69-add1-22239dd6b4c9" containerID="3e65323273b53e69a191561d363157fe4069f114f38cbddcf5b62c2b1af760dd" exitCode=0 Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.526887 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kmbq" event={"ID":"cf829738-1178-4c69-add1-22239dd6b4c9","Type":"ContainerDied","Data":"3e65323273b53e69a191561d363157fe4069f114f38cbddcf5b62c2b1af760dd"} Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.526954 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kmbq" event={"ID":"cf829738-1178-4c69-add1-22239dd6b4c9","Type":"ContainerDied","Data":"ace25cb229f206c69fae103b01f4bbf9ce1d22a3165706f8c8c888949d433f13"} Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.527068 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kmbq" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.530224 4637 generic.go:334] "Generic (PLEG): container finished" podID="85895600-b021-44c3-ac07-f6ccd4f40226" containerID="c1c655a98a831a7af713d70e74499ca2ceb8c2be7baabf923b674c44e6257e7a" exitCode=0 Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.530290 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8wk6j" event={"ID":"85895600-b021-44c3-ac07-f6ccd4f40226","Type":"ContainerDied","Data":"c1c655a98a831a7af713d70e74499ca2ceb8c2be7baabf923b674c44e6257e7a"} Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.530325 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8wk6j" event={"ID":"85895600-b021-44c3-ac07-f6ccd4f40226","Type":"ContainerDied","Data":"2579a8e908f04d04ac286dc28e30c16d3a60fd0e8ce974c663db05503f1cddc3"} Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.530410 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8wk6j" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.540734 4637 generic.go:334] "Generic (PLEG): container finished" podID="8023e8dc-e1e9-48d5-b1de-005d6e38e174" containerID="bab648da99f54d6442044d2d536974e817a278d439cf465e9761f8a4eb524140" exitCode=0 Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.540806 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2zct" event={"ID":"8023e8dc-e1e9-48d5-b1de-005d6e38e174","Type":"ContainerDied","Data":"bab648da99f54d6442044d2d536974e817a278d439cf465e9761f8a4eb524140"} Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.550238 4637 scope.go:117] "RemoveContainer" containerID="c9204562a43459cebfae9c844c6f8aaee4ff880cddf2192897c27c7e8261dc59" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.552910 4637 generic.go:334] "Generic (PLEG): container finished" podID="f9bf0fdb-f832-4c97-a1e4-74aace880d56" containerID="fb30f9c5047f9af0e12f923b556d095973f45c9bb228f6ee9640246c532cc441" exitCode=0 Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.553156 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nhdr" event={"ID":"f9bf0fdb-f832-4c97-a1e4-74aace880d56","Type":"ContainerDied","Data":"fb30f9c5047f9af0e12f923b556d095973f45c9bb228f6ee9640246c532cc441"} Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.563250 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bfht5"] Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.566657 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bfht5"] Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.574956 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.581010 4637 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2k4sx"] Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.585600 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.587074 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2k4sx"] Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.597832 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8wk6j"] Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.601277 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8wk6j"] Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.609317 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.616445 4637 scope.go:117] "RemoveContainer" containerID="87b20f5bbfc41ee5f8783f4b28c9aa0721e7cc277431d694c2a9eb139697ac51" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.621054 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kmbq"] Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.627761 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kmbq"] Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.637727 4637 scope.go:117] "RemoveContainer" containerID="1f21d9c09ffa1d58944d6cbcc0f5b6702c22f8cb91f68f5dffe1babbd07826c2" Dec 01 14:50:38 crc kubenswrapper[4637]: E1201 14:50:38.640209 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f21d9c09ffa1d58944d6cbcc0f5b6702c22f8cb91f68f5dffe1babbd07826c2\": container with ID starting with 
1f21d9c09ffa1d58944d6cbcc0f5b6702c22f8cb91f68f5dffe1babbd07826c2 not found: ID does not exist" containerID="1f21d9c09ffa1d58944d6cbcc0f5b6702c22f8cb91f68f5dffe1babbd07826c2" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.640253 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f21d9c09ffa1d58944d6cbcc0f5b6702c22f8cb91f68f5dffe1babbd07826c2"} err="failed to get container status \"1f21d9c09ffa1d58944d6cbcc0f5b6702c22f8cb91f68f5dffe1babbd07826c2\": rpc error: code = NotFound desc = could not find container \"1f21d9c09ffa1d58944d6cbcc0f5b6702c22f8cb91f68f5dffe1babbd07826c2\": container with ID starting with 1f21d9c09ffa1d58944d6cbcc0f5b6702c22f8cb91f68f5dffe1babbd07826c2 not found: ID does not exist" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.640287 4637 scope.go:117] "RemoveContainer" containerID="c9204562a43459cebfae9c844c6f8aaee4ff880cddf2192897c27c7e8261dc59" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.640500 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l2zct" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.644564 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-244ll"] Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.656546 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-244ll"] Dec 01 14:50:38 crc kubenswrapper[4637]: E1201 14:50:38.661337 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9204562a43459cebfae9c844c6f8aaee4ff880cddf2192897c27c7e8261dc59\": container with ID starting with c9204562a43459cebfae9c844c6f8aaee4ff880cddf2192897c27c7e8261dc59 not found: ID does not exist" containerID="c9204562a43459cebfae9c844c6f8aaee4ff880cddf2192897c27c7e8261dc59" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.661386 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9204562a43459cebfae9c844c6f8aaee4ff880cddf2192897c27c7e8261dc59"} err="failed to get container status \"c9204562a43459cebfae9c844c6f8aaee4ff880cddf2192897c27c7e8261dc59\": rpc error: code = NotFound desc = could not find container \"c9204562a43459cebfae9c844c6f8aaee4ff880cddf2192897c27c7e8261dc59\": container with ID starting with c9204562a43459cebfae9c844c6f8aaee4ff880cddf2192897c27c7e8261dc59 not found: ID does not exist" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.661419 4637 scope.go:117] "RemoveContainer" containerID="87b20f5bbfc41ee5f8783f4b28c9aa0721e7cc277431d694c2a9eb139697ac51" Dec 01 14:50:38 crc kubenswrapper[4637]: E1201 14:50:38.661978 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87b20f5bbfc41ee5f8783f4b28c9aa0721e7cc277431d694c2a9eb139697ac51\": container with ID starting with 
87b20f5bbfc41ee5f8783f4b28c9aa0721e7cc277431d694c2a9eb139697ac51 not found: ID does not exist" containerID="87b20f5bbfc41ee5f8783f4b28c9aa0721e7cc277431d694c2a9eb139697ac51" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.662018 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87b20f5bbfc41ee5f8783f4b28c9aa0721e7cc277431d694c2a9eb139697ac51"} err="failed to get container status \"87b20f5bbfc41ee5f8783f4b28c9aa0721e7cc277431d694c2a9eb139697ac51\": rpc error: code = NotFound desc = could not find container \"87b20f5bbfc41ee5f8783f4b28c9aa0721e7cc277431d694c2a9eb139697ac51\": container with ID starting with 87b20f5bbfc41ee5f8783f4b28c9aa0721e7cc277431d694c2a9eb139697ac51 not found: ID does not exist" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.662051 4637 scope.go:117] "RemoveContainer" containerID="6c92435f77f8ad2059a93fa0f96f72c7f68de242a728c533752e7841cabce2a4" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.691311 4637 scope.go:117] "RemoveContainer" containerID="caaee8df49d80fd20d83df86fac12bfcf5d7b9c319eed96ee2aca64b00f48432" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.718197 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.719243 4637 scope.go:117] "RemoveContainer" containerID="a2f8426c415f4f11fc2bc5b60d36c453842a51acd8a5aaf70b780aee36946c4c" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.726160 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8kwzk"] Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.729121 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8023e8dc-e1e9-48d5-b1de-005d6e38e174-catalog-content\") pod \"8023e8dc-e1e9-48d5-b1de-005d6e38e174\" (UID: 
\"8023e8dc-e1e9-48d5-b1de-005d6e38e174\") " Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.729277 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8023e8dc-e1e9-48d5-b1de-005d6e38e174-utilities\") pod \"8023e8dc-e1e9-48d5-b1de-005d6e38e174\" (UID: \"8023e8dc-e1e9-48d5-b1de-005d6e38e174\") " Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.729322 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg8bp\" (UniqueName: \"kubernetes.io/projected/8023e8dc-e1e9-48d5-b1de-005d6e38e174-kube-api-access-xg8bp\") pod \"8023e8dc-e1e9-48d5-b1de-005d6e38e174\" (UID: \"8023e8dc-e1e9-48d5-b1de-005d6e38e174\") " Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.730420 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8023e8dc-e1e9-48d5-b1de-005d6e38e174-utilities" (OuterVolumeSpecName: "utilities") pod "8023e8dc-e1e9-48d5-b1de-005d6e38e174" (UID: "8023e8dc-e1e9-48d5-b1de-005d6e38e174"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.733078 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8023e8dc-e1e9-48d5-b1de-005d6e38e174-kube-api-access-xg8bp" (OuterVolumeSpecName: "kube-api-access-xg8bp") pod "8023e8dc-e1e9-48d5-b1de-005d6e38e174" (UID: "8023e8dc-e1e9-48d5-b1de-005d6e38e174"). InnerVolumeSpecName "kube-api-access-xg8bp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.802798 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.804559 4637 scope.go:117] "RemoveContainer" containerID="6c92435f77f8ad2059a93fa0f96f72c7f68de242a728c533752e7841cabce2a4" Dec 01 14:50:38 crc kubenswrapper[4637]: E1201 14:50:38.805714 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c92435f77f8ad2059a93fa0f96f72c7f68de242a728c533752e7841cabce2a4\": container with ID starting with 6c92435f77f8ad2059a93fa0f96f72c7f68de242a728c533752e7841cabce2a4 not found: ID does not exist" containerID="6c92435f77f8ad2059a93fa0f96f72c7f68de242a728c533752e7841cabce2a4" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.805907 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c92435f77f8ad2059a93fa0f96f72c7f68de242a728c533752e7841cabce2a4"} err="failed to get container status \"6c92435f77f8ad2059a93fa0f96f72c7f68de242a728c533752e7841cabce2a4\": rpc error: code = NotFound desc = could not find container \"6c92435f77f8ad2059a93fa0f96f72c7f68de242a728c533752e7841cabce2a4\": container with ID starting with 6c92435f77f8ad2059a93fa0f96f72c7f68de242a728c533752e7841cabce2a4 not found: ID does not exist" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.806075 4637 scope.go:117] "RemoveContainer" containerID="caaee8df49d80fd20d83df86fac12bfcf5d7b9c319eed96ee2aca64b00f48432" Dec 01 14:50:38 crc kubenswrapper[4637]: E1201 14:50:38.806647 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caaee8df49d80fd20d83df86fac12bfcf5d7b9c319eed96ee2aca64b00f48432\": container with ID starting with 
caaee8df49d80fd20d83df86fac12bfcf5d7b9c319eed96ee2aca64b00f48432 not found: ID does not exist" containerID="caaee8df49d80fd20d83df86fac12bfcf5d7b9c319eed96ee2aca64b00f48432" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.806741 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caaee8df49d80fd20d83df86fac12bfcf5d7b9c319eed96ee2aca64b00f48432"} err="failed to get container status \"caaee8df49d80fd20d83df86fac12bfcf5d7b9c319eed96ee2aca64b00f48432\": rpc error: code = NotFound desc = could not find container \"caaee8df49d80fd20d83df86fac12bfcf5d7b9c319eed96ee2aca64b00f48432\": container with ID starting with caaee8df49d80fd20d83df86fac12bfcf5d7b9c319eed96ee2aca64b00f48432 not found: ID does not exist" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.806782 4637 scope.go:117] "RemoveContainer" containerID="a2f8426c415f4f11fc2bc5b60d36c453842a51acd8a5aaf70b780aee36946c4c" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.807536 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 01 14:50:38 crc kubenswrapper[4637]: E1201 14:50:38.807278 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2f8426c415f4f11fc2bc5b60d36c453842a51acd8a5aaf70b780aee36946c4c\": container with ID starting with a2f8426c415f4f11fc2bc5b60d36c453842a51acd8a5aaf70b780aee36946c4c not found: ID does not exist" containerID="a2f8426c415f4f11fc2bc5b60d36c453842a51acd8a5aaf70b780aee36946c4c" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.807778 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2f8426c415f4f11fc2bc5b60d36c453842a51acd8a5aaf70b780aee36946c4c"} err="failed to get container status \"a2f8426c415f4f11fc2bc5b60d36c453842a51acd8a5aaf70b780aee36946c4c\": rpc error: code = NotFound desc = could not find 
container \"a2f8426c415f4f11fc2bc5b60d36c453842a51acd8a5aaf70b780aee36946c4c\": container with ID starting with a2f8426c415f4f11fc2bc5b60d36c453842a51acd8a5aaf70b780aee36946c4c not found: ID does not exist" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.807855 4637 scope.go:117] "RemoveContainer" containerID="df466ebd6e91862047f9fc1afefa0f41c383aa719d1527b6f8d874f1961d0a2b" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.814160 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6nhdr" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.832158 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9bf0fdb-f832-4c97-a1e4-74aace880d56-utilities\") pod \"f9bf0fdb-f832-4c97-a1e4-74aace880d56\" (UID: \"f9bf0fdb-f832-4c97-a1e4-74aace880d56\") " Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.832872 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9cfx\" (UniqueName: \"kubernetes.io/projected/f9bf0fdb-f832-4c97-a1e4-74aace880d56-kube-api-access-l9cfx\") pod \"f9bf0fdb-f832-4c97-a1e4-74aace880d56\" (UID: \"f9bf0fdb-f832-4c97-a1e4-74aace880d56\") " Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.833114 4637 scope.go:117] "RemoveContainer" containerID="df466ebd6e91862047f9fc1afefa0f41c383aa719d1527b6f8d874f1961d0a2b" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.833134 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9bf0fdb-f832-4c97-a1e4-74aace880d56-catalog-content\") pod \"f9bf0fdb-f832-4c97-a1e4-74aace880d56\" (UID: \"f9bf0fdb-f832-4c97-a1e4-74aace880d56\") " Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.833561 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8023e8dc-e1e9-48d5-b1de-005d6e38e174-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.833583 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg8bp\" (UniqueName: \"kubernetes.io/projected/8023e8dc-e1e9-48d5-b1de-005d6e38e174-kube-api-access-xg8bp\") on node \"crc\" DevicePath \"\"" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.833628 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 01 14:50:38 crc kubenswrapper[4637]: E1201 14:50:38.833635 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df466ebd6e91862047f9fc1afefa0f41c383aa719d1527b6f8d874f1961d0a2b\": container with ID starting with df466ebd6e91862047f9fc1afefa0f41c383aa719d1527b6f8d874f1961d0a2b not found: ID does not exist" containerID="df466ebd6e91862047f9fc1afefa0f41c383aa719d1527b6f8d874f1961d0a2b" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.833691 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df466ebd6e91862047f9fc1afefa0f41c383aa719d1527b6f8d874f1961d0a2b"} err="failed to get container status \"df466ebd6e91862047f9fc1afefa0f41c383aa719d1527b6f8d874f1961d0a2b\": rpc error: code = NotFound desc = could not find container \"df466ebd6e91862047f9fc1afefa0f41c383aa719d1527b6f8d874f1961d0a2b\": container with ID starting with df466ebd6e91862047f9fc1afefa0f41c383aa719d1527b6f8d874f1961d0a2b not found: ID does not exist" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.833730 4637 scope.go:117] "RemoveContainer" containerID="3e65323273b53e69a191561d363157fe4069f114f38cbddcf5b62c2b1af760dd" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.836811 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/f9bf0fdb-f832-4c97-a1e4-74aace880d56-utilities" (OuterVolumeSpecName: "utilities") pod "f9bf0fdb-f832-4c97-a1e4-74aace880d56" (UID: "f9bf0fdb-f832-4c97-a1e4-74aace880d56"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.842247 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9bf0fdb-f832-4c97-a1e4-74aace880d56-kube-api-access-l9cfx" (OuterVolumeSpecName: "kube-api-access-l9cfx") pod "f9bf0fdb-f832-4c97-a1e4-74aace880d56" (UID: "f9bf0fdb-f832-4c97-a1e4-74aace880d56"). InnerVolumeSpecName "kube-api-access-l9cfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.875888 4637 scope.go:117] "RemoveContainer" containerID="ef787a6bbb86388bb4d01512a1107830712ebc4036a4ca5e486a212a40ff0332" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.883756 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8023e8dc-e1e9-48d5-b1de-005d6e38e174-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8023e8dc-e1e9-48d5-b1de-005d6e38e174" (UID: "8023e8dc-e1e9-48d5-b1de-005d6e38e174"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.894365 4637 scope.go:117] "RemoveContainer" containerID="97096ba14b798a947e68bf759fe9092e20b0665e022e922daa58204a42ac263b" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.913677 4637 scope.go:117] "RemoveContainer" containerID="3e65323273b53e69a191561d363157fe4069f114f38cbddcf5b62c2b1af760dd" Dec 01 14:50:38 crc kubenswrapper[4637]: E1201 14:50:38.915899 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e65323273b53e69a191561d363157fe4069f114f38cbddcf5b62c2b1af760dd\": container with ID starting with 3e65323273b53e69a191561d363157fe4069f114f38cbddcf5b62c2b1af760dd not found: ID does not exist" containerID="3e65323273b53e69a191561d363157fe4069f114f38cbddcf5b62c2b1af760dd" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.916073 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e65323273b53e69a191561d363157fe4069f114f38cbddcf5b62c2b1af760dd"} err="failed to get container status \"3e65323273b53e69a191561d363157fe4069f114f38cbddcf5b62c2b1af760dd\": rpc error: code = NotFound desc = could not find container \"3e65323273b53e69a191561d363157fe4069f114f38cbddcf5b62c2b1af760dd\": container with ID starting with 3e65323273b53e69a191561d363157fe4069f114f38cbddcf5b62c2b1af760dd not found: ID does not exist" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.916108 4637 scope.go:117] "RemoveContainer" containerID="ef787a6bbb86388bb4d01512a1107830712ebc4036a4ca5e486a212a40ff0332" Dec 01 14:50:38 crc kubenswrapper[4637]: E1201 14:50:38.917416 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef787a6bbb86388bb4d01512a1107830712ebc4036a4ca5e486a212a40ff0332\": container with ID starting with 
ef787a6bbb86388bb4d01512a1107830712ebc4036a4ca5e486a212a40ff0332 not found: ID does not exist" containerID="ef787a6bbb86388bb4d01512a1107830712ebc4036a4ca5e486a212a40ff0332" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.917470 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef787a6bbb86388bb4d01512a1107830712ebc4036a4ca5e486a212a40ff0332"} err="failed to get container status \"ef787a6bbb86388bb4d01512a1107830712ebc4036a4ca5e486a212a40ff0332\": rpc error: code = NotFound desc = could not find container \"ef787a6bbb86388bb4d01512a1107830712ebc4036a4ca5e486a212a40ff0332\": container with ID starting with ef787a6bbb86388bb4d01512a1107830712ebc4036a4ca5e486a212a40ff0332 not found: ID does not exist" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.917509 4637 scope.go:117] "RemoveContainer" containerID="97096ba14b798a947e68bf759fe9092e20b0665e022e922daa58204a42ac263b" Dec 01 14:50:38 crc kubenswrapper[4637]: E1201 14:50:38.917953 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97096ba14b798a947e68bf759fe9092e20b0665e022e922daa58204a42ac263b\": container with ID starting with 97096ba14b798a947e68bf759fe9092e20b0665e022e922daa58204a42ac263b not found: ID does not exist" containerID="97096ba14b798a947e68bf759fe9092e20b0665e022e922daa58204a42ac263b" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.918012 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97096ba14b798a947e68bf759fe9092e20b0665e022e922daa58204a42ac263b"} err="failed to get container status \"97096ba14b798a947e68bf759fe9092e20b0665e022e922daa58204a42ac263b\": rpc error: code = NotFound desc = could not find container \"97096ba14b798a947e68bf759fe9092e20b0665e022e922daa58204a42ac263b\": container with ID starting with 97096ba14b798a947e68bf759fe9092e20b0665e022e922daa58204a42ac263b not found: ID does not 
exist" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.918035 4637 scope.go:117] "RemoveContainer" containerID="c1c655a98a831a7af713d70e74499ca2ceb8c2be7baabf923b674c44e6257e7a" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.919883 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.931324 4637 scope.go:117] "RemoveContainer" containerID="254f9c67e8fcdaf47a430594ba8ea418949db181da741e09e4a695906d325e10" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.934899 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9bf0fdb-f832-4c97-a1e4-74aace880d56-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.934923 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8023e8dc-e1e9-48d5-b1de-005d6e38e174-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.934990 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9cfx\" (UniqueName: \"kubernetes.io/projected/f9bf0fdb-f832-4c97-a1e4-74aace880d56-kube-api-access-l9cfx\") on node \"crc\" DevicePath \"\"" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.949065 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9bf0fdb-f832-4c97-a1e4-74aace880d56-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9bf0fdb-f832-4c97-a1e4-74aace880d56" (UID: "f9bf0fdb-f832-4c97-a1e4-74aace880d56"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.949607 4637 scope.go:117] "RemoveContainer" containerID="ae6e0bbd2d62324bdf90826b89f1899e04f100383130ec2bb8c8d144a60beaa4" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.964961 4637 scope.go:117] "RemoveContainer" containerID="c1c655a98a831a7af713d70e74499ca2ceb8c2be7baabf923b674c44e6257e7a" Dec 01 14:50:38 crc kubenswrapper[4637]: E1201 14:50:38.965621 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1c655a98a831a7af713d70e74499ca2ceb8c2be7baabf923b674c44e6257e7a\": container with ID starting with c1c655a98a831a7af713d70e74499ca2ceb8c2be7baabf923b674c44e6257e7a not found: ID does not exist" containerID="c1c655a98a831a7af713d70e74499ca2ceb8c2be7baabf923b674c44e6257e7a" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.965678 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1c655a98a831a7af713d70e74499ca2ceb8c2be7baabf923b674c44e6257e7a"} err="failed to get container status \"c1c655a98a831a7af713d70e74499ca2ceb8c2be7baabf923b674c44e6257e7a\": rpc error: code = NotFound desc = could not find container \"c1c655a98a831a7af713d70e74499ca2ceb8c2be7baabf923b674c44e6257e7a\": container with ID starting with c1c655a98a831a7af713d70e74499ca2ceb8c2be7baabf923b674c44e6257e7a not found: ID does not exist" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.965707 4637 scope.go:117] "RemoveContainer" containerID="254f9c67e8fcdaf47a430594ba8ea418949db181da741e09e4a695906d325e10" Dec 01 14:50:38 crc kubenswrapper[4637]: E1201 14:50:38.966874 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"254f9c67e8fcdaf47a430594ba8ea418949db181da741e09e4a695906d325e10\": container with ID starting with 
254f9c67e8fcdaf47a430594ba8ea418949db181da741e09e4a695906d325e10 not found: ID does not exist" containerID="254f9c67e8fcdaf47a430594ba8ea418949db181da741e09e4a695906d325e10" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.966955 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"254f9c67e8fcdaf47a430594ba8ea418949db181da741e09e4a695906d325e10"} err="failed to get container status \"254f9c67e8fcdaf47a430594ba8ea418949db181da741e09e4a695906d325e10\": rpc error: code = NotFound desc = could not find container \"254f9c67e8fcdaf47a430594ba8ea418949db181da741e09e4a695906d325e10\": container with ID starting with 254f9c67e8fcdaf47a430594ba8ea418949db181da741e09e4a695906d325e10 not found: ID does not exist" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.966991 4637 scope.go:117] "RemoveContainer" containerID="ae6e0bbd2d62324bdf90826b89f1899e04f100383130ec2bb8c8d144a60beaa4" Dec 01 14:50:38 crc kubenswrapper[4637]: E1201 14:50:38.967587 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae6e0bbd2d62324bdf90826b89f1899e04f100383130ec2bb8c8d144a60beaa4\": container with ID starting with ae6e0bbd2d62324bdf90826b89f1899e04f100383130ec2bb8c8d144a60beaa4 not found: ID does not exist" containerID="ae6e0bbd2d62324bdf90826b89f1899e04f100383130ec2bb8c8d144a60beaa4" Dec 01 14:50:38 crc kubenswrapper[4637]: I1201 14:50:38.967622 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae6e0bbd2d62324bdf90826b89f1899e04f100383130ec2bb8c8d144a60beaa4"} err="failed to get container status \"ae6e0bbd2d62324bdf90826b89f1899e04f100383130ec2bb8c8d144a60beaa4\": rpc error: code = NotFound desc = could not find container \"ae6e0bbd2d62324bdf90826b89f1899e04f100383130ec2bb8c8d144a60beaa4\": container with ID starting with ae6e0bbd2d62324bdf90826b89f1899e04f100383130ec2bb8c8d144a60beaa4 not found: ID does not 
exist" Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.007556 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.035863 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9bf0fdb-f832-4c97-a1e4-74aace880d56-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.148699 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.231993 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.231995 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.351745 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.351790 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.562701 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6nhdr" Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.563009 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nhdr" event={"ID":"f9bf0fdb-f832-4c97-a1e4-74aace880d56","Type":"ContainerDied","Data":"438a54379878c397d41234bf6ba6c50fbc365bd0014d935665c200f992fe5439"} Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.563509 4637 scope.go:117] "RemoveContainer" containerID="fb30f9c5047f9af0e12f923b556d095973f45c9bb228f6ee9640246c532cc441" Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.568202 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8kwzk" event={"ID":"1c222b01-860c-4973-9a37-7abcbfdf910f","Type":"ContainerStarted","Data":"c5c5db6e342f866540d209cf8bbd6cdfc8d614f6c8c87d6c14f362334d3a789e"} Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.568249 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8kwzk" event={"ID":"1c222b01-860c-4973-9a37-7abcbfdf910f","Type":"ContainerStarted","Data":"c20c724823e460d3ba97785fbfae6ec8e5f5037e6dd2b5381f83f1b424e0f7a2"} Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.569276 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8kwzk" Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.574667 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-8kwzk" Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.579963 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2zct" event={"ID":"8023e8dc-e1e9-48d5-b1de-005d6e38e174","Type":"ContainerDied","Data":"6d3865fbc3abd53b8dfe6067b9a5a7ff3324ca6f0c821bd9ec5fde29e070c8c1"} Dec 01 14:50:39 crc 
kubenswrapper[4637]: I1201 14:50:39.580113 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l2zct" Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.591509 4637 scope.go:117] "RemoveContainer" containerID="0f0fedbac598256694fedf8e82d761c1967f7598456bcc80bb8d0dbf56856c6f" Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.604345 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-8kwzk" podStartSLOduration=2.604322138 podStartE2EDuration="2.604322138s" podCreationTimestamp="2025-12-01 14:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:50:39.590614036 +0000 UTC m=+290.108322864" watchObservedRunningTime="2025-12-01 14:50:39.604322138 +0000 UTC m=+290.122030966" Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.616590 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6nhdr"] Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.617358 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6nhdr"] Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.632428 4637 scope.go:117] "RemoveContainer" containerID="ae5259ad1e87c79d7bec94a5d49a2dc1803410958965516de7f1feb40859a762" Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.667614 4637 scope.go:117] "RemoveContainer" containerID="bab648da99f54d6442044d2d536974e817a278d439cf465e9761f8a4eb524140" Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.669202 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l2zct"] Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.676951 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l2zct"] Dec 01 14:50:39 
crc kubenswrapper[4637]: I1201 14:50:39.691725 4637 scope.go:117] "RemoveContainer" containerID="d119a5b001756a026683ecebc2394b8bbe3267f3bbf5fd8e3aa92ba228bd5e59" Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.724199 4637 scope.go:117] "RemoveContainer" containerID="bd8c13bd93caf013b68bdf414589025d3e8487f43e6e998aea6648c0340a7e42" Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.781745 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b76fb53-14a1-49f9-b120-a4b492ab70fc" path="/var/lib/kubelet/pods/3b76fb53-14a1-49f9-b120-a4b492ab70fc/volumes" Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.782466 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e99277b-aa2d-4f8d-a2f9-aeb954080a27" path="/var/lib/kubelet/pods/6e99277b-aa2d-4f8d-a2f9-aeb954080a27/volumes" Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.783034 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8023e8dc-e1e9-48d5-b1de-005d6e38e174" path="/var/lib/kubelet/pods/8023e8dc-e1e9-48d5-b1de-005d6e38e174/volumes" Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.784101 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85895600-b021-44c3-ac07-f6ccd4f40226" path="/var/lib/kubelet/pods/85895600-b021-44c3-ac07-f6ccd4f40226/volumes" Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.784691 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90a8e718-fe2f-4e8f-acc6-bb25efde0385" path="/var/lib/kubelet/pods/90a8e718-fe2f-4e8f-acc6-bb25efde0385/volumes" Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.785683 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf829738-1178-4c69-add1-22239dd6b4c9" path="/var/lib/kubelet/pods/cf829738-1178-4c69-add1-22239dd6b4c9/volumes" Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.786293 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f9bf0fdb-f832-4c97-a1e4-74aace880d56" path="/var/lib/kubelet/pods/f9bf0fdb-f832-4c97-a1e4-74aace880d56/volumes" Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.847289 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 01 14:50:39 crc kubenswrapper[4637]: I1201 14:50:39.968878 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 01 14:50:40 crc kubenswrapper[4637]: I1201 14:50:40.298743 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 01 14:50:40 crc kubenswrapper[4637]: I1201 14:50:40.316397 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 01 14:50:40 crc kubenswrapper[4637]: I1201 14:50:40.423553 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 01 14:50:46 crc kubenswrapper[4637]: I1201 14:50:46.333955 4637 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 01 14:50:46 crc kubenswrapper[4637]: I1201 14:50:46.334655 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://48b43853e02d2c55375d0ef9f1f5b2af47d47ceb8e9afbe5c8e7e0facdf7ba5d" gracePeriod=5 Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.486284 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.486945 4637 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.533665 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.533732 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.533751 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.533780 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.533796 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.533812 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.533842 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.533978 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.534003 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.534082 4637 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.534103 4637 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.543580 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.634900 4637 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.635328 4637 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.635341 4637 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.653272 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.653345 4637 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="48b43853e02d2c55375d0ef9f1f5b2af47d47ceb8e9afbe5c8e7e0facdf7ba5d" exitCode=137 Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.653396 4637 scope.go:117] "RemoveContainer" containerID="48b43853e02d2c55375d0ef9f1f5b2af47d47ceb8e9afbe5c8e7e0facdf7ba5d" Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.653519 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.677616 4637 scope.go:117] "RemoveContainer" containerID="48b43853e02d2c55375d0ef9f1f5b2af47d47ceb8e9afbe5c8e7e0facdf7ba5d" Dec 01 14:50:51 crc kubenswrapper[4637]: E1201 14:50:51.678188 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48b43853e02d2c55375d0ef9f1f5b2af47d47ceb8e9afbe5c8e7e0facdf7ba5d\": container with ID starting with 48b43853e02d2c55375d0ef9f1f5b2af47d47ceb8e9afbe5c8e7e0facdf7ba5d not found: ID does not exist" containerID="48b43853e02d2c55375d0ef9f1f5b2af47d47ceb8e9afbe5c8e7e0facdf7ba5d" Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.678222 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48b43853e02d2c55375d0ef9f1f5b2af47d47ceb8e9afbe5c8e7e0facdf7ba5d"} err="failed to get container status \"48b43853e02d2c55375d0ef9f1f5b2af47d47ceb8e9afbe5c8e7e0facdf7ba5d\": rpc error: code = NotFound desc = could not find container \"48b43853e02d2c55375d0ef9f1f5b2af47d47ceb8e9afbe5c8e7e0facdf7ba5d\": container with ID starting with 48b43853e02d2c55375d0ef9f1f5b2af47d47ceb8e9afbe5c8e7e0facdf7ba5d not found: 
ID does not exist" Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.778386 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.779273 4637 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.793175 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.793223 4637 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="2ff1a834-a751-4991-8999-f8cd8816bc85" Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.796895 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 01 14:50:51 crc kubenswrapper[4637]: I1201 14:50:51.796989 4637 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="2ff1a834-a751-4991-8999-f8cd8816bc85" Dec 01 14:51:23 crc kubenswrapper[4637]: I1201 14:51:23.700245 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8rphr"] Dec 01 14:51:23 crc kubenswrapper[4637]: I1201 14:51:23.701052 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-8rphr" podUID="bb60680f-a87a-4086-b701-91f89a1d123f" containerName="controller-manager" containerID="cri-o://48f4113f704c56a03be71f12c01b88632d67bc3ad0e4364023190de000da396c" gracePeriod=30 Dec 01 14:51:23 crc kubenswrapper[4637]: I1201 14:51:23.882101 4637 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl"] Dec 01 14:51:23 crc kubenswrapper[4637]: I1201 14:51:23.882345 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl" podUID="f778c361-3570-4a96-b4d1-1ba163ce04b9" containerName="route-controller-manager" containerID="cri-o://74fe00aa2365b32a72af7de3e162f5a6307546f50111bd6a41df7921536972da" gracePeriod=30 Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.220653 4637 generic.go:334] "Generic (PLEG): container finished" podID="f778c361-3570-4a96-b4d1-1ba163ce04b9" containerID="74fe00aa2365b32a72af7de3e162f5a6307546f50111bd6a41df7921536972da" exitCode=0 Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.220979 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl" event={"ID":"f778c361-3570-4a96-b4d1-1ba163ce04b9","Type":"ContainerDied","Data":"74fe00aa2365b32a72af7de3e162f5a6307546f50111bd6a41df7921536972da"} Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.221318 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8rphr" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.227193 4637 generic.go:334] "Generic (PLEG): container finished" podID="bb60680f-a87a-4086-b701-91f89a1d123f" containerID="48f4113f704c56a03be71f12c01b88632d67bc3ad0e4364023190de000da396c" exitCode=0 Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.227231 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8rphr" event={"ID":"bb60680f-a87a-4086-b701-91f89a1d123f","Type":"ContainerDied","Data":"48f4113f704c56a03be71f12c01b88632d67bc3ad0e4364023190de000da396c"} Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.227256 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8rphr" event={"ID":"bb60680f-a87a-4086-b701-91f89a1d123f","Type":"ContainerDied","Data":"3283de10eba341a11f31120531ee50fe92fc6d17ccadb54a6771b250702dd16a"} Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.227273 4637 scope.go:117] "RemoveContainer" containerID="48f4113f704c56a03be71f12c01b88632d67bc3ad0e4364023190de000da396c" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.235733 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb60680f-a87a-4086-b701-91f89a1d123f-serving-cert\") pod \"bb60680f-a87a-4086-b701-91f89a1d123f\" (UID: \"bb60680f-a87a-4086-b701-91f89a1d123f\") " Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.235798 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bb60680f-a87a-4086-b701-91f89a1d123f-proxy-ca-bundles\") pod \"bb60680f-a87a-4086-b701-91f89a1d123f\" (UID: \"bb60680f-a87a-4086-b701-91f89a1d123f\") " Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.235820 4637 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb60680f-a87a-4086-b701-91f89a1d123f-config\") pod \"bb60680f-a87a-4086-b701-91f89a1d123f\" (UID: \"bb60680f-a87a-4086-b701-91f89a1d123f\") " Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.235845 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb60680f-a87a-4086-b701-91f89a1d123f-client-ca\") pod \"bb60680f-a87a-4086-b701-91f89a1d123f\" (UID: \"bb60680f-a87a-4086-b701-91f89a1d123f\") " Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.235862 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64pj4\" (UniqueName: \"kubernetes.io/projected/bb60680f-a87a-4086-b701-91f89a1d123f-kube-api-access-64pj4\") pod \"bb60680f-a87a-4086-b701-91f89a1d123f\" (UID: \"bb60680f-a87a-4086-b701-91f89a1d123f\") " Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.237592 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb60680f-a87a-4086-b701-91f89a1d123f-config" (OuterVolumeSpecName: "config") pod "bb60680f-a87a-4086-b701-91f89a1d123f" (UID: "bb60680f-a87a-4086-b701-91f89a1d123f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.238372 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb60680f-a87a-4086-b701-91f89a1d123f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bb60680f-a87a-4086-b701-91f89a1d123f" (UID: "bb60680f-a87a-4086-b701-91f89a1d123f"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.238624 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb60680f-a87a-4086-b701-91f89a1d123f-client-ca" (OuterVolumeSpecName: "client-ca") pod "bb60680f-a87a-4086-b701-91f89a1d123f" (UID: "bb60680f-a87a-4086-b701-91f89a1d123f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.253084 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb60680f-a87a-4086-b701-91f89a1d123f-kube-api-access-64pj4" (OuterVolumeSpecName: "kube-api-access-64pj4") pod "bb60680f-a87a-4086-b701-91f89a1d123f" (UID: "bb60680f-a87a-4086-b701-91f89a1d123f"). InnerVolumeSpecName "kube-api-access-64pj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.274392 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb60680f-a87a-4086-b701-91f89a1d123f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bb60680f-a87a-4086-b701-91f89a1d123f" (UID: "bb60680f-a87a-4086-b701-91f89a1d123f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.286151 4637 scope.go:117] "RemoveContainer" containerID="48f4113f704c56a03be71f12c01b88632d67bc3ad0e4364023190de000da396c" Dec 01 14:51:24 crc kubenswrapper[4637]: E1201 14:51:24.286488 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48f4113f704c56a03be71f12c01b88632d67bc3ad0e4364023190de000da396c\": container with ID starting with 48f4113f704c56a03be71f12c01b88632d67bc3ad0e4364023190de000da396c not found: ID does not exist" containerID="48f4113f704c56a03be71f12c01b88632d67bc3ad0e4364023190de000da396c" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.286517 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48f4113f704c56a03be71f12c01b88632d67bc3ad0e4364023190de000da396c"} err="failed to get container status \"48f4113f704c56a03be71f12c01b88632d67bc3ad0e4364023190de000da396c\": rpc error: code = NotFound desc = could not find container \"48f4113f704c56a03be71f12c01b88632d67bc3ad0e4364023190de000da396c\": container with ID starting with 48f4113f704c56a03be71f12c01b88632d67bc3ad0e4364023190de000da396c not found: ID does not exist" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.298697 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-787665467f-9kctf"] Dec 01 14:51:24 crc kubenswrapper[4637]: E1201 14:51:24.298923 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90a8e718-fe2f-4e8f-acc6-bb25efde0385" containerName="extract-content" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.298963 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="90a8e718-fe2f-4e8f-acc6-bb25efde0385" containerName="extract-content" Dec 01 14:51:24 crc kubenswrapper[4637]: E1201 14:51:24.298981 4637 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="8023e8dc-e1e9-48d5-b1de-005d6e38e174" containerName="extract-content" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.298987 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="8023e8dc-e1e9-48d5-b1de-005d6e38e174" containerName="extract-content" Dec 01 14:51:24 crc kubenswrapper[4637]: E1201 14:51:24.298994 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf829738-1178-4c69-add1-22239dd6b4c9" containerName="extract-content" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.298999 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf829738-1178-4c69-add1-22239dd6b4c9" containerName="extract-content" Dec 01 14:51:24 crc kubenswrapper[4637]: E1201 14:51:24.299009 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8023e8dc-e1e9-48d5-b1de-005d6e38e174" containerName="registry-server" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.299015 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="8023e8dc-e1e9-48d5-b1de-005d6e38e174" containerName="registry-server" Dec 01 14:51:24 crc kubenswrapper[4637]: E1201 14:51:24.299022 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90a8e718-fe2f-4e8f-acc6-bb25efde0385" containerName="registry-server" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.299027 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="90a8e718-fe2f-4e8f-acc6-bb25efde0385" containerName="registry-server" Dec 01 14:51:24 crc kubenswrapper[4637]: E1201 14:51:24.299034 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b76fb53-14a1-49f9-b120-a4b492ab70fc" containerName="extract-content" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.299040 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b76fb53-14a1-49f9-b120-a4b492ab70fc" containerName="extract-content" Dec 01 14:51:24 crc kubenswrapper[4637]: E1201 14:51:24.299046 4637 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f9bf0fdb-f832-4c97-a1e4-74aace880d56" containerName="registry-server" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.299051 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9bf0fdb-f832-4c97-a1e4-74aace880d56" containerName="registry-server" Dec 01 14:51:24 crc kubenswrapper[4637]: E1201 14:51:24.299060 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9bf0fdb-f832-4c97-a1e4-74aace880d56" containerName="extract-content" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.299071 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9bf0fdb-f832-4c97-a1e4-74aace880d56" containerName="extract-content" Dec 01 14:51:24 crc kubenswrapper[4637]: E1201 14:51:24.299083 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf829738-1178-4c69-add1-22239dd6b4c9" containerName="registry-server" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.299090 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf829738-1178-4c69-add1-22239dd6b4c9" containerName="registry-server" Dec 01 14:51:24 crc kubenswrapper[4637]: E1201 14:51:24.299103 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b76fb53-14a1-49f9-b120-a4b492ab70fc" containerName="registry-server" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.299111 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b76fb53-14a1-49f9-b120-a4b492ab70fc" containerName="registry-server" Dec 01 14:51:24 crc kubenswrapper[4637]: E1201 14:51:24.299120 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb60680f-a87a-4086-b701-91f89a1d123f" containerName="controller-manager" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.299125 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb60680f-a87a-4086-b701-91f89a1d123f" containerName="controller-manager" Dec 01 14:51:24 crc kubenswrapper[4637]: E1201 14:51:24.299133 4637 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.299139 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 01 14:51:24 crc kubenswrapper[4637]: E1201 14:51:24.299145 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85895600-b021-44c3-ac07-f6ccd4f40226" containerName="extract-content" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.299151 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="85895600-b021-44c3-ac07-f6ccd4f40226" containerName="extract-content" Dec 01 14:51:24 crc kubenswrapper[4637]: E1201 14:51:24.299158 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90a8e718-fe2f-4e8f-acc6-bb25efde0385" containerName="extract-utilities" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.299164 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="90a8e718-fe2f-4e8f-acc6-bb25efde0385" containerName="extract-utilities" Dec 01 14:51:24 crc kubenswrapper[4637]: E1201 14:51:24.299171 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e99277b-aa2d-4f8d-a2f9-aeb954080a27" containerName="marketplace-operator" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.299176 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e99277b-aa2d-4f8d-a2f9-aeb954080a27" containerName="marketplace-operator" Dec 01 14:51:24 crc kubenswrapper[4637]: E1201 14:51:24.299185 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85895600-b021-44c3-ac07-f6ccd4f40226" containerName="registry-server" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.299190 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="85895600-b021-44c3-ac07-f6ccd4f40226" containerName="registry-server" Dec 01 14:51:24 crc kubenswrapper[4637]: E1201 14:51:24.299199 4637 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8023e8dc-e1e9-48d5-b1de-005d6e38e174" containerName="extract-utilities" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.299204 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="8023e8dc-e1e9-48d5-b1de-005d6e38e174" containerName="extract-utilities" Dec 01 14:51:24 crc kubenswrapper[4637]: E1201 14:51:24.299212 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85895600-b021-44c3-ac07-f6ccd4f40226" containerName="extract-utilities" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.299217 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="85895600-b021-44c3-ac07-f6ccd4f40226" containerName="extract-utilities" Dec 01 14:51:24 crc kubenswrapper[4637]: E1201 14:51:24.299226 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b76fb53-14a1-49f9-b120-a4b492ab70fc" containerName="extract-utilities" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.299231 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b76fb53-14a1-49f9-b120-a4b492ab70fc" containerName="extract-utilities" Dec 01 14:51:24 crc kubenswrapper[4637]: E1201 14:51:24.299239 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9bf0fdb-f832-4c97-a1e4-74aace880d56" containerName="extract-utilities" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.299245 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9bf0fdb-f832-4c97-a1e4-74aace880d56" containerName="extract-utilities" Dec 01 14:51:24 crc kubenswrapper[4637]: E1201 14:51:24.299253 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf829738-1178-4c69-add1-22239dd6b4c9" containerName="extract-utilities" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.299259 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf829738-1178-4c69-add1-22239dd6b4c9" containerName="extract-utilities" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.299362 4637 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="85895600-b021-44c3-ac07-f6ccd4f40226" containerName="registry-server" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.299371 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="90a8e718-fe2f-4e8f-acc6-bb25efde0385" containerName="registry-server" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.299378 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b76fb53-14a1-49f9-b120-a4b492ab70fc" containerName="registry-server" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.299386 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e99277b-aa2d-4f8d-a2f9-aeb954080a27" containerName="marketplace-operator" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.299396 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9bf0fdb-f832-4c97-a1e4-74aace880d56" containerName="registry-server" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.299404 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf829738-1178-4c69-add1-22239dd6b4c9" containerName="registry-server" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.299413 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="8023e8dc-e1e9-48d5-b1de-005d6e38e174" containerName="registry-server" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.299421 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.299426 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb60680f-a87a-4086-b701-91f89a1d123f" containerName="controller-manager" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.299773 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-787665467f-9kctf" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.319196 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-787665467f-9kctf"] Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.340797 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdd915d5-c835-42e4-bca1-943fa35a0890-client-ca\") pod \"controller-manager-787665467f-9kctf\" (UID: \"bdd915d5-c835-42e4-bca1-943fa35a0890\") " pod="openshift-controller-manager/controller-manager-787665467f-9kctf" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.340837 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdd915d5-c835-42e4-bca1-943fa35a0890-config\") pod \"controller-manager-787665467f-9kctf\" (UID: \"bdd915d5-c835-42e4-bca1-943fa35a0890\") " pod="openshift-controller-manager/controller-manager-787665467f-9kctf" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.340872 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bdd915d5-c835-42e4-bca1-943fa35a0890-proxy-ca-bundles\") pod \"controller-manager-787665467f-9kctf\" (UID: \"bdd915d5-c835-42e4-bca1-943fa35a0890\") " pod="openshift-controller-manager/controller-manager-787665467f-9kctf" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.340894 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdd915d5-c835-42e4-bca1-943fa35a0890-serving-cert\") pod \"controller-manager-787665467f-9kctf\" (UID: \"bdd915d5-c835-42e4-bca1-943fa35a0890\") " 
pod="openshift-controller-manager/controller-manager-787665467f-9kctf" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.340953 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q5hv\" (UniqueName: \"kubernetes.io/projected/bdd915d5-c835-42e4-bca1-943fa35a0890-kube-api-access-9q5hv\") pod \"controller-manager-787665467f-9kctf\" (UID: \"bdd915d5-c835-42e4-bca1-943fa35a0890\") " pod="openshift-controller-manager/controller-manager-787665467f-9kctf" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.340993 4637 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb60680f-a87a-4086-b701-91f89a1d123f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.341009 4637 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bb60680f-a87a-4086-b701-91f89a1d123f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.341021 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb60680f-a87a-4086-b701-91f89a1d123f-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.341031 4637 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb60680f-a87a-4086-b701-91f89a1d123f-client-ca\") on node \"crc\" DevicePath \"\"" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.341042 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64pj4\" (UniqueName: \"kubernetes.io/projected/bb60680f-a87a-4086-b701-91f89a1d123f-kube-api-access-64pj4\") on node \"crc\" DevicePath \"\"" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.353617 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.442544 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f778c361-3570-4a96-b4d1-1ba163ce04b9-client-ca\") pod \"f778c361-3570-4a96-b4d1-1ba163ce04b9\" (UID: \"f778c361-3570-4a96-b4d1-1ba163ce04b9\") " Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.442618 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f778c361-3570-4a96-b4d1-1ba163ce04b9-serving-cert\") pod \"f778c361-3570-4a96-b4d1-1ba163ce04b9\" (UID: \"f778c361-3570-4a96-b4d1-1ba163ce04b9\") " Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.442672 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljdzs\" (UniqueName: \"kubernetes.io/projected/f778c361-3570-4a96-b4d1-1ba163ce04b9-kube-api-access-ljdzs\") pod \"f778c361-3570-4a96-b4d1-1ba163ce04b9\" (UID: \"f778c361-3570-4a96-b4d1-1ba163ce04b9\") " Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.442692 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f778c361-3570-4a96-b4d1-1ba163ce04b9-config\") pod \"f778c361-3570-4a96-b4d1-1ba163ce04b9\" (UID: \"f778c361-3570-4a96-b4d1-1ba163ce04b9\") " Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.442911 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q5hv\" (UniqueName: \"kubernetes.io/projected/bdd915d5-c835-42e4-bca1-943fa35a0890-kube-api-access-9q5hv\") pod \"controller-manager-787665467f-9kctf\" (UID: \"bdd915d5-c835-42e4-bca1-943fa35a0890\") " pod="openshift-controller-manager/controller-manager-787665467f-9kctf" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 
14:51:24.442971 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdd915d5-c835-42e4-bca1-943fa35a0890-client-ca\") pod \"controller-manager-787665467f-9kctf\" (UID: \"bdd915d5-c835-42e4-bca1-943fa35a0890\") " pod="openshift-controller-manager/controller-manager-787665467f-9kctf" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.442994 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdd915d5-c835-42e4-bca1-943fa35a0890-config\") pod \"controller-manager-787665467f-9kctf\" (UID: \"bdd915d5-c835-42e4-bca1-943fa35a0890\") " pod="openshift-controller-manager/controller-manager-787665467f-9kctf" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.443029 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bdd915d5-c835-42e4-bca1-943fa35a0890-proxy-ca-bundles\") pod \"controller-manager-787665467f-9kctf\" (UID: \"bdd915d5-c835-42e4-bca1-943fa35a0890\") " pod="openshift-controller-manager/controller-manager-787665467f-9kctf" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.443049 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdd915d5-c835-42e4-bca1-943fa35a0890-serving-cert\") pod \"controller-manager-787665467f-9kctf\" (UID: \"bdd915d5-c835-42e4-bca1-943fa35a0890\") " pod="openshift-controller-manager/controller-manager-787665467f-9kctf" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.443354 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f778c361-3570-4a96-b4d1-1ba163ce04b9-client-ca" (OuterVolumeSpecName: "client-ca") pod "f778c361-3570-4a96-b4d1-1ba163ce04b9" (UID: "f778c361-3570-4a96-b4d1-1ba163ce04b9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.443790 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f778c361-3570-4a96-b4d1-1ba163ce04b9-config" (OuterVolumeSpecName: "config") pod "f778c361-3570-4a96-b4d1-1ba163ce04b9" (UID: "f778c361-3570-4a96-b4d1-1ba163ce04b9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.444728 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdd915d5-c835-42e4-bca1-943fa35a0890-client-ca\") pod \"controller-manager-787665467f-9kctf\" (UID: \"bdd915d5-c835-42e4-bca1-943fa35a0890\") " pod="openshift-controller-manager/controller-manager-787665467f-9kctf" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.445134 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdd915d5-c835-42e4-bca1-943fa35a0890-config\") pod \"controller-manager-787665467f-9kctf\" (UID: \"bdd915d5-c835-42e4-bca1-943fa35a0890\") " pod="openshift-controller-manager/controller-manager-787665467f-9kctf" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.446127 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bdd915d5-c835-42e4-bca1-943fa35a0890-proxy-ca-bundles\") pod \"controller-manager-787665467f-9kctf\" (UID: \"bdd915d5-c835-42e4-bca1-943fa35a0890\") " pod="openshift-controller-manager/controller-manager-787665467f-9kctf" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.447716 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f778c361-3570-4a96-b4d1-1ba163ce04b9-kube-api-access-ljdzs" (OuterVolumeSpecName: "kube-api-access-ljdzs") pod 
"f778c361-3570-4a96-b4d1-1ba163ce04b9" (UID: "f778c361-3570-4a96-b4d1-1ba163ce04b9"). InnerVolumeSpecName "kube-api-access-ljdzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.447906 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdd915d5-c835-42e4-bca1-943fa35a0890-serving-cert\") pod \"controller-manager-787665467f-9kctf\" (UID: \"bdd915d5-c835-42e4-bca1-943fa35a0890\") " pod="openshift-controller-manager/controller-manager-787665467f-9kctf" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.448438 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f778c361-3570-4a96-b4d1-1ba163ce04b9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f778c361-3570-4a96-b4d1-1ba163ce04b9" (UID: "f778c361-3570-4a96-b4d1-1ba163ce04b9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.457287 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q5hv\" (UniqueName: \"kubernetes.io/projected/bdd915d5-c835-42e4-bca1-943fa35a0890-kube-api-access-9q5hv\") pod \"controller-manager-787665467f-9kctf\" (UID: \"bdd915d5-c835-42e4-bca1-943fa35a0890\") " pod="openshift-controller-manager/controller-manager-787665467f-9kctf" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.544219 4637 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f778c361-3570-4a96-b4d1-1ba163ce04b9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.544263 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljdzs\" (UniqueName: \"kubernetes.io/projected/f778c361-3570-4a96-b4d1-1ba163ce04b9-kube-api-access-ljdzs\") on node \"crc\" DevicePath \"\"" Dec 
01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.544277 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f778c361-3570-4a96-b4d1-1ba163ce04b9-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.544288 4637 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f778c361-3570-4a96-b4d1-1ba163ce04b9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.644989 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-787665467f-9kctf" Dec 01 14:51:24 crc kubenswrapper[4637]: I1201 14:51:24.865063 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-787665467f-9kctf"] Dec 01 14:51:25 crc kubenswrapper[4637]: I1201 14:51:25.234762 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-787665467f-9kctf" event={"ID":"bdd915d5-c835-42e4-bca1-943fa35a0890","Type":"ContainerStarted","Data":"74342a4f7bf5575528faed38b3cbf968e92491c6047414cb2eea702fefe71d40"} Dec 01 14:51:25 crc kubenswrapper[4637]: I1201 14:51:25.234807 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-787665467f-9kctf" event={"ID":"bdd915d5-c835-42e4-bca1-943fa35a0890","Type":"ContainerStarted","Data":"ac5f94c6f0c643d4a7c17fbc6be9447df6892c8a3cbd227a5f8b8d01aa48fbeb"} Dec 01 14:51:25 crc kubenswrapper[4637]: I1201 14:51:25.234824 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-787665467f-9kctf" Dec 01 14:51:25 crc kubenswrapper[4637]: I1201 14:51:25.236444 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl" 
event={"ID":"f778c361-3570-4a96-b4d1-1ba163ce04b9","Type":"ContainerDied","Data":"afa61061464a753b82e0217201a29867318adeef4c803dc6a61174cf77313bcb"} Dec 01 14:51:25 crc kubenswrapper[4637]: I1201 14:51:25.236473 4637 scope.go:117] "RemoveContainer" containerID="74fe00aa2365b32a72af7de3e162f5a6307546f50111bd6a41df7921536972da" Dec 01 14:51:25 crc kubenswrapper[4637]: I1201 14:51:25.236474 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl" Dec 01 14:51:25 crc kubenswrapper[4637]: I1201 14:51:25.237816 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8rphr" Dec 01 14:51:25 crc kubenswrapper[4637]: I1201 14:51:25.274178 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-787665467f-9kctf" Dec 01 14:51:25 crc kubenswrapper[4637]: I1201 14:51:25.280885 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-787665467f-9kctf" podStartSLOduration=1.280864878 podStartE2EDuration="1.280864878s" podCreationTimestamp="2025-12-01 14:51:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:51:25.279080328 +0000 UTC m=+335.796789156" watchObservedRunningTime="2025-12-01 14:51:25.280864878 +0000 UTC m=+335.798573706" Dec 01 14:51:25 crc kubenswrapper[4637]: I1201 14:51:25.299019 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl"] Dec 01 14:51:25 crc kubenswrapper[4637]: I1201 14:51:25.303438 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-htdcl"] Dec 01 14:51:25 crc 
kubenswrapper[4637]: I1201 14:51:25.310287 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8rphr"] Dec 01 14:51:25 crc kubenswrapper[4637]: I1201 14:51:25.310333 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8rphr"] Dec 01 14:51:25 crc kubenswrapper[4637]: I1201 14:51:25.776946 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb60680f-a87a-4086-b701-91f89a1d123f" path="/var/lib/kubelet/pods/bb60680f-a87a-4086-b701-91f89a1d123f/volumes" Dec 01 14:51:25 crc kubenswrapper[4637]: I1201 14:51:25.777456 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f778c361-3570-4a96-b4d1-1ba163ce04b9" path="/var/lib/kubelet/pods/f778c361-3570-4a96-b4d1-1ba163ce04b9/volumes" Dec 01 14:51:26 crc kubenswrapper[4637]: I1201 14:51:26.093348 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw"] Dec 01 14:51:26 crc kubenswrapper[4637]: E1201 14:51:26.093676 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f778c361-3570-4a96-b4d1-1ba163ce04b9" containerName="route-controller-manager" Dec 01 14:51:26 crc kubenswrapper[4637]: I1201 14:51:26.093694 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="f778c361-3570-4a96-b4d1-1ba163ce04b9" containerName="route-controller-manager" Dec 01 14:51:26 crc kubenswrapper[4637]: I1201 14:51:26.093832 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="f778c361-3570-4a96-b4d1-1ba163ce04b9" containerName="route-controller-manager" Dec 01 14:51:26 crc kubenswrapper[4637]: I1201 14:51:26.094405 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw" Dec 01 14:51:26 crc kubenswrapper[4637]: I1201 14:51:26.100341 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 01 14:51:26 crc kubenswrapper[4637]: I1201 14:51:26.100767 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 01 14:51:26 crc kubenswrapper[4637]: I1201 14:51:26.100989 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 01 14:51:26 crc kubenswrapper[4637]: I1201 14:51:26.101826 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 01 14:51:26 crc kubenswrapper[4637]: I1201 14:51:26.102163 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 01 14:51:26 crc kubenswrapper[4637]: I1201 14:51:26.102374 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 01 14:51:26 crc kubenswrapper[4637]: I1201 14:51:26.120314 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw"] Dec 01 14:51:26 crc kubenswrapper[4637]: I1201 14:51:26.171842 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pc5f\" (UniqueName: \"kubernetes.io/projected/daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee-kube-api-access-9pc5f\") pod \"route-controller-manager-5695f464f5-tfkmw\" (UID: \"daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee\") " pod="openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw" Dec 01 14:51:26 crc kubenswrapper[4637]: I1201 14:51:26.171919 4637 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee-client-ca\") pod \"route-controller-manager-5695f464f5-tfkmw\" (UID: \"daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee\") " pod="openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw" Dec 01 14:51:26 crc kubenswrapper[4637]: I1201 14:51:26.172007 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee-serving-cert\") pod \"route-controller-manager-5695f464f5-tfkmw\" (UID: \"daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee\") " pod="openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw" Dec 01 14:51:26 crc kubenswrapper[4637]: I1201 14:51:26.172083 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee-config\") pod \"route-controller-manager-5695f464f5-tfkmw\" (UID: \"daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee\") " pod="openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw" Dec 01 14:51:26 crc kubenswrapper[4637]: I1201 14:51:26.272741 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee-client-ca\") pod \"route-controller-manager-5695f464f5-tfkmw\" (UID: \"daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee\") " pod="openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw" Dec 01 14:51:26 crc kubenswrapper[4637]: I1201 14:51:26.272804 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee-serving-cert\") pod 
\"route-controller-manager-5695f464f5-tfkmw\" (UID: \"daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee\") " pod="openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw" Dec 01 14:51:26 crc kubenswrapper[4637]: I1201 14:51:26.272844 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee-config\") pod \"route-controller-manager-5695f464f5-tfkmw\" (UID: \"daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee\") " pod="openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw" Dec 01 14:51:26 crc kubenswrapper[4637]: I1201 14:51:26.272917 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pc5f\" (UniqueName: \"kubernetes.io/projected/daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee-kube-api-access-9pc5f\") pod \"route-controller-manager-5695f464f5-tfkmw\" (UID: \"daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee\") " pod="openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw" Dec 01 14:51:26 crc kubenswrapper[4637]: I1201 14:51:26.274535 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee-client-ca\") pod \"route-controller-manager-5695f464f5-tfkmw\" (UID: \"daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee\") " pod="openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw" Dec 01 14:51:26 crc kubenswrapper[4637]: I1201 14:51:26.275819 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee-config\") pod \"route-controller-manager-5695f464f5-tfkmw\" (UID: \"daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee\") " pod="openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw" Dec 01 14:51:26 crc kubenswrapper[4637]: I1201 14:51:26.286969 4637 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee-serving-cert\") pod \"route-controller-manager-5695f464f5-tfkmw\" (UID: \"daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee\") " pod="openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw" Dec 01 14:51:26 crc kubenswrapper[4637]: I1201 14:51:26.297826 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pc5f\" (UniqueName: \"kubernetes.io/projected/daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee-kube-api-access-9pc5f\") pod \"route-controller-manager-5695f464f5-tfkmw\" (UID: \"daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee\") " pod="openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw" Dec 01 14:51:26 crc kubenswrapper[4637]: I1201 14:51:26.412784 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw" Dec 01 14:51:26 crc kubenswrapper[4637]: I1201 14:51:26.642906 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw"] Dec 01 14:51:27 crc kubenswrapper[4637]: I1201 14:51:27.251700 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw" event={"ID":"daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee","Type":"ContainerStarted","Data":"04e71ca3534a0f27a4890ce949abfbd671675733d0303dc174eb624d67e20ddb"} Dec 01 14:51:27 crc kubenswrapper[4637]: I1201 14:51:27.251802 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw" event={"ID":"daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee","Type":"ContainerStarted","Data":"55ac9a6b164652022aeb2464ad1e535df6de7dd46c40c2cca828c8ed32e108f5"} Dec 01 14:51:27 crc kubenswrapper[4637]: I1201 14:51:27.251828 4637 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw" Dec 01 14:51:27 crc kubenswrapper[4637]: I1201 14:51:27.258000 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw" Dec 01 14:51:27 crc kubenswrapper[4637]: I1201 14:51:27.275427 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw" podStartSLOduration=3.275395372 podStartE2EDuration="3.275395372s" podCreationTimestamp="2025-12-01 14:51:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:51:27.272091033 +0000 UTC m=+337.789799861" watchObservedRunningTime="2025-12-01 14:51:27.275395372 +0000 UTC m=+337.793104210" Dec 01 14:51:45 crc kubenswrapper[4637]: I1201 14:51:45.613316 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 14:51:45 crc kubenswrapper[4637]: I1201 14:51:45.613823 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 14:52:00 crc kubenswrapper[4637]: I1201 14:52:00.147790 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2tgpf"] Dec 01 14:52:00 crc kubenswrapper[4637]: I1201 14:52:00.149247 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" Dec 01 14:52:00 crc kubenswrapper[4637]: I1201 14:52:00.170533 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2tgpf"] Dec 01 14:52:00 crc kubenswrapper[4637]: I1201 14:52:00.265530 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2tgpf\" (UID: \"4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09\") " pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" Dec 01 14:52:00 crc kubenswrapper[4637]: I1201 14:52:00.265573 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2tgpf\" (UID: \"4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09\") " pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" Dec 01 14:52:00 crc kubenswrapper[4637]: I1201 14:52:00.265611 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-2tgpf\" (UID: \"4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09\") " pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" Dec 01 14:52:00 crc kubenswrapper[4637]: I1201 14:52:00.265650 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09-trusted-ca\") pod \"image-registry-66df7c8f76-2tgpf\" (UID: \"4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" Dec 01 14:52:00 crc kubenswrapper[4637]: I1201 14:52:00.265671 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr9gc\" (UniqueName: \"kubernetes.io/projected/4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09-kube-api-access-fr9gc\") pod \"image-registry-66df7c8f76-2tgpf\" (UID: \"4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09\") " pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" Dec 01 14:52:00 crc kubenswrapper[4637]: I1201 14:52:00.265692 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09-bound-sa-token\") pod \"image-registry-66df7c8f76-2tgpf\" (UID: \"4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09\") " pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" Dec 01 14:52:00 crc kubenswrapper[4637]: I1201 14:52:00.265714 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09-registry-tls\") pod \"image-registry-66df7c8f76-2tgpf\" (UID: \"4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09\") " pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" Dec 01 14:52:00 crc kubenswrapper[4637]: I1201 14:52:00.265785 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09-registry-certificates\") pod \"image-registry-66df7c8f76-2tgpf\" (UID: \"4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09\") " pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" Dec 01 14:52:00 crc kubenswrapper[4637]: I1201 14:52:00.288211 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-2tgpf\" (UID: \"4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09\") " pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" Dec 01 14:52:00 crc kubenswrapper[4637]: I1201 14:52:00.366603 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09-trusted-ca\") pod \"image-registry-66df7c8f76-2tgpf\" (UID: \"4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09\") " pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" Dec 01 14:52:00 crc kubenswrapper[4637]: I1201 14:52:00.367073 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr9gc\" (UniqueName: \"kubernetes.io/projected/4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09-kube-api-access-fr9gc\") pod \"image-registry-66df7c8f76-2tgpf\" (UID: \"4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09\") " pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" Dec 01 14:52:00 crc kubenswrapper[4637]: I1201 14:52:00.367120 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09-bound-sa-token\") pod \"image-registry-66df7c8f76-2tgpf\" (UID: \"4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09\") " pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" Dec 01 14:52:00 crc kubenswrapper[4637]: I1201 14:52:00.367165 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09-registry-tls\") pod \"image-registry-66df7c8f76-2tgpf\" (UID: \"4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09\") " pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" Dec 01 14:52:00 crc kubenswrapper[4637]: I1201 14:52:00.367245 4637 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09-registry-certificates\") pod \"image-registry-66df7c8f76-2tgpf\" (UID: \"4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09\") " pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" Dec 01 14:52:00 crc kubenswrapper[4637]: I1201 14:52:00.367290 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2tgpf\" (UID: \"4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09\") " pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" Dec 01 14:52:00 crc kubenswrapper[4637]: I1201 14:52:00.367329 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2tgpf\" (UID: \"4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09\") " pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" Dec 01 14:52:00 crc kubenswrapper[4637]: I1201 14:52:00.368033 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2tgpf\" (UID: \"4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09\") " pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" Dec 01 14:52:00 crc kubenswrapper[4637]: I1201 14:52:00.368577 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09-registry-certificates\") pod \"image-registry-66df7c8f76-2tgpf\" (UID: \"4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" Dec 01 14:52:00 crc kubenswrapper[4637]: I1201 14:52:00.369663 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09-trusted-ca\") pod \"image-registry-66df7c8f76-2tgpf\" (UID: \"4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09\") " pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" Dec 01 14:52:00 crc kubenswrapper[4637]: I1201 14:52:00.372850 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09-registry-tls\") pod \"image-registry-66df7c8f76-2tgpf\" (UID: \"4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09\") " pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" Dec 01 14:52:00 crc kubenswrapper[4637]: I1201 14:52:00.372918 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2tgpf\" (UID: \"4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09\") " pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" Dec 01 14:52:00 crc kubenswrapper[4637]: I1201 14:52:00.385994 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr9gc\" (UniqueName: \"kubernetes.io/projected/4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09-kube-api-access-fr9gc\") pod \"image-registry-66df7c8f76-2tgpf\" (UID: \"4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09\") " pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" Dec 01 14:52:00 crc kubenswrapper[4637]: I1201 14:52:00.386522 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09-bound-sa-token\") pod \"image-registry-66df7c8f76-2tgpf\" (UID: 
\"4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09\") " pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" Dec 01 14:52:00 crc kubenswrapper[4637]: I1201 14:52:00.465156 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" Dec 01 14:52:00 crc kubenswrapper[4637]: I1201 14:52:00.883111 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2tgpf"] Dec 01 14:52:01 crc kubenswrapper[4637]: I1201 14:52:01.439787 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" event={"ID":"4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09","Type":"ContainerStarted","Data":"98797cd34aec11e62712cd9a90effb36d0c5a892b0bc5885dec1fe6c2ee57d74"} Dec 01 14:52:01 crc kubenswrapper[4637]: I1201 14:52:01.439834 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" event={"ID":"4b78a9e5-d61d-43d6-bb22-bcaab0ec8d09","Type":"ContainerStarted","Data":"0c3717bcbc518899ee180fafbd1c1b0687713153f08f2960ed50d76940a5a402"} Dec 01 14:52:01 crc kubenswrapper[4637]: I1201 14:52:01.439950 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" Dec 01 14:52:01 crc kubenswrapper[4637]: I1201 14:52:01.459805 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" podStartSLOduration=1.459783122 podStartE2EDuration="1.459783122s" podCreationTimestamp="2025-12-01 14:52:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:52:01.457327625 +0000 UTC m=+371.975036463" watchObservedRunningTime="2025-12-01 14:52:01.459783122 +0000 UTC m=+371.977491950" Dec 01 14:52:01 crc kubenswrapper[4637]: I1201 
14:52:01.747538 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-df4gw"] Dec 01 14:52:01 crc kubenswrapper[4637]: I1201 14:52:01.748846 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-df4gw" Dec 01 14:52:01 crc kubenswrapper[4637]: I1201 14:52:01.750798 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 01 14:52:01 crc kubenswrapper[4637]: I1201 14:52:01.763318 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-df4gw"] Dec 01 14:52:01 crc kubenswrapper[4637]: I1201 14:52:01.785416 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a407577-a89c-4bd1-9e97-0140f2ea2c40-catalog-content\") pod \"redhat-marketplace-df4gw\" (UID: \"2a407577-a89c-4bd1-9e97-0140f2ea2c40\") " pod="openshift-marketplace/redhat-marketplace-df4gw" Dec 01 14:52:01 crc kubenswrapper[4637]: I1201 14:52:01.785497 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a407577-a89c-4bd1-9e97-0140f2ea2c40-utilities\") pod \"redhat-marketplace-df4gw\" (UID: \"2a407577-a89c-4bd1-9e97-0140f2ea2c40\") " pod="openshift-marketplace/redhat-marketplace-df4gw" Dec 01 14:52:01 crc kubenswrapper[4637]: I1201 14:52:01.785543 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mnsk\" (UniqueName: \"kubernetes.io/projected/2a407577-a89c-4bd1-9e97-0140f2ea2c40-kube-api-access-2mnsk\") pod \"redhat-marketplace-df4gw\" (UID: \"2a407577-a89c-4bd1-9e97-0140f2ea2c40\") " pod="openshift-marketplace/redhat-marketplace-df4gw" Dec 01 14:52:01 crc kubenswrapper[4637]: I1201 14:52:01.886870 4637 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a407577-a89c-4bd1-9e97-0140f2ea2c40-utilities\") pod \"redhat-marketplace-df4gw\" (UID: \"2a407577-a89c-4bd1-9e97-0140f2ea2c40\") " pod="openshift-marketplace/redhat-marketplace-df4gw" Dec 01 14:52:01 crc kubenswrapper[4637]: I1201 14:52:01.886980 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mnsk\" (UniqueName: \"kubernetes.io/projected/2a407577-a89c-4bd1-9e97-0140f2ea2c40-kube-api-access-2mnsk\") pod \"redhat-marketplace-df4gw\" (UID: \"2a407577-a89c-4bd1-9e97-0140f2ea2c40\") " pod="openshift-marketplace/redhat-marketplace-df4gw" Dec 01 14:52:01 crc kubenswrapper[4637]: I1201 14:52:01.887391 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a407577-a89c-4bd1-9e97-0140f2ea2c40-catalog-content\") pod \"redhat-marketplace-df4gw\" (UID: \"2a407577-a89c-4bd1-9e97-0140f2ea2c40\") " pod="openshift-marketplace/redhat-marketplace-df4gw" Dec 01 14:52:01 crc kubenswrapper[4637]: I1201 14:52:01.887405 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a407577-a89c-4bd1-9e97-0140f2ea2c40-utilities\") pod \"redhat-marketplace-df4gw\" (UID: \"2a407577-a89c-4bd1-9e97-0140f2ea2c40\") " pod="openshift-marketplace/redhat-marketplace-df4gw" Dec 01 14:52:01 crc kubenswrapper[4637]: I1201 14:52:01.887699 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a407577-a89c-4bd1-9e97-0140f2ea2c40-catalog-content\") pod \"redhat-marketplace-df4gw\" (UID: \"2a407577-a89c-4bd1-9e97-0140f2ea2c40\") " pod="openshift-marketplace/redhat-marketplace-df4gw" Dec 01 14:52:01 crc kubenswrapper[4637]: I1201 14:52:01.911961 4637 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2mnsk\" (UniqueName: \"kubernetes.io/projected/2a407577-a89c-4bd1-9e97-0140f2ea2c40-kube-api-access-2mnsk\") pod \"redhat-marketplace-df4gw\" (UID: \"2a407577-a89c-4bd1-9e97-0140f2ea2c40\") " pod="openshift-marketplace/redhat-marketplace-df4gw" Dec 01 14:52:01 crc kubenswrapper[4637]: I1201 14:52:01.942310 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gbh9m"] Dec 01 14:52:01 crc kubenswrapper[4637]: I1201 14:52:01.943528 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gbh9m" Dec 01 14:52:01 crc kubenswrapper[4637]: I1201 14:52:01.950535 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 01 14:52:01 crc kubenswrapper[4637]: I1201 14:52:01.963107 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gbh9m"] Dec 01 14:52:01 crc kubenswrapper[4637]: I1201 14:52:01.988144 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngjcb\" (UniqueName: \"kubernetes.io/projected/8331a591-a7d8-4c36-ae47-f973a8468986-kube-api-access-ngjcb\") pod \"community-operators-gbh9m\" (UID: \"8331a591-a7d8-4c36-ae47-f973a8468986\") " pod="openshift-marketplace/community-operators-gbh9m" Dec 01 14:52:01 crc kubenswrapper[4637]: I1201 14:52:01.988441 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8331a591-a7d8-4c36-ae47-f973a8468986-catalog-content\") pod \"community-operators-gbh9m\" (UID: \"8331a591-a7d8-4c36-ae47-f973a8468986\") " pod="openshift-marketplace/community-operators-gbh9m" Dec 01 14:52:01 crc kubenswrapper[4637]: I1201 14:52:01.988550 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8331a591-a7d8-4c36-ae47-f973a8468986-utilities\") pod \"community-operators-gbh9m\" (UID: \"8331a591-a7d8-4c36-ae47-f973a8468986\") " pod="openshift-marketplace/community-operators-gbh9m" Dec 01 14:52:02 crc kubenswrapper[4637]: I1201 14:52:02.064528 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-df4gw" Dec 01 14:52:02 crc kubenswrapper[4637]: I1201 14:52:02.090002 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngjcb\" (UniqueName: \"kubernetes.io/projected/8331a591-a7d8-4c36-ae47-f973a8468986-kube-api-access-ngjcb\") pod \"community-operators-gbh9m\" (UID: \"8331a591-a7d8-4c36-ae47-f973a8468986\") " pod="openshift-marketplace/community-operators-gbh9m" Dec 01 14:52:02 crc kubenswrapper[4637]: I1201 14:52:02.090060 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8331a591-a7d8-4c36-ae47-f973a8468986-catalog-content\") pod \"community-operators-gbh9m\" (UID: \"8331a591-a7d8-4c36-ae47-f973a8468986\") " pod="openshift-marketplace/community-operators-gbh9m" Dec 01 14:52:02 crc kubenswrapper[4637]: I1201 14:52:02.090090 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8331a591-a7d8-4c36-ae47-f973a8468986-utilities\") pod \"community-operators-gbh9m\" (UID: \"8331a591-a7d8-4c36-ae47-f973a8468986\") " pod="openshift-marketplace/community-operators-gbh9m" Dec 01 14:52:02 crc kubenswrapper[4637]: I1201 14:52:02.090590 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8331a591-a7d8-4c36-ae47-f973a8468986-catalog-content\") pod \"community-operators-gbh9m\" (UID: \"8331a591-a7d8-4c36-ae47-f973a8468986\") " 
pod="openshift-marketplace/community-operators-gbh9m" Dec 01 14:52:02 crc kubenswrapper[4637]: I1201 14:52:02.090680 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8331a591-a7d8-4c36-ae47-f973a8468986-utilities\") pod \"community-operators-gbh9m\" (UID: \"8331a591-a7d8-4c36-ae47-f973a8468986\") " pod="openshift-marketplace/community-operators-gbh9m" Dec 01 14:52:02 crc kubenswrapper[4637]: I1201 14:52:02.110411 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngjcb\" (UniqueName: \"kubernetes.io/projected/8331a591-a7d8-4c36-ae47-f973a8468986-kube-api-access-ngjcb\") pod \"community-operators-gbh9m\" (UID: \"8331a591-a7d8-4c36-ae47-f973a8468986\") " pod="openshift-marketplace/community-operators-gbh9m" Dec 01 14:52:02 crc kubenswrapper[4637]: I1201 14:52:02.260459 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gbh9m" Dec 01 14:52:02 crc kubenswrapper[4637]: I1201 14:52:02.452620 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-df4gw"] Dec 01 14:52:02 crc kubenswrapper[4637]: W1201 14:52:02.459774 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a407577_a89c_4bd1_9e97_0140f2ea2c40.slice/crio-25a10a3d442eef181d2d8f6b8958df2380539ed07425ae73c57183a290fe0cb2 WatchSource:0}: Error finding container 25a10a3d442eef181d2d8f6b8958df2380539ed07425ae73c57183a290fe0cb2: Status 404 returned error can't find the container with id 25a10a3d442eef181d2d8f6b8958df2380539ed07425ae73c57183a290fe0cb2 Dec 01 14:52:02 crc kubenswrapper[4637]: I1201 14:52:02.629569 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gbh9m"] Dec 01 14:52:02 crc kubenswrapper[4637]: W1201 14:52:02.694592 4637 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8331a591_a7d8_4c36_ae47_f973a8468986.slice/crio-1bde9b1d73b81013b4f9fa92f0e952962fce973bb0b7bd2cd964f70f0d1f196b WatchSource:0}: Error finding container 1bde9b1d73b81013b4f9fa92f0e952962fce973bb0b7bd2cd964f70f0d1f196b: Status 404 returned error can't find the container with id 1bde9b1d73b81013b4f9fa92f0e952962fce973bb0b7bd2cd964f70f0d1f196b Dec 01 14:52:03 crc kubenswrapper[4637]: I1201 14:52:03.473027 4637 generic.go:334] "Generic (PLEG): container finished" podID="8331a591-a7d8-4c36-ae47-f973a8468986" containerID="71c58bb4e7fdfae3241301408d1a4c6ecd4ae61d68af8398bc20efcbfcbc2f4f" exitCode=0 Dec 01 14:52:03 crc kubenswrapper[4637]: I1201 14:52:03.473079 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gbh9m" event={"ID":"8331a591-a7d8-4c36-ae47-f973a8468986","Type":"ContainerDied","Data":"71c58bb4e7fdfae3241301408d1a4c6ecd4ae61d68af8398bc20efcbfcbc2f4f"} Dec 01 14:52:03 crc kubenswrapper[4637]: I1201 14:52:03.473529 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gbh9m" event={"ID":"8331a591-a7d8-4c36-ae47-f973a8468986","Type":"ContainerStarted","Data":"1bde9b1d73b81013b4f9fa92f0e952962fce973bb0b7bd2cd964f70f0d1f196b"} Dec 01 14:52:03 crc kubenswrapper[4637]: I1201 14:52:03.484669 4637 generic.go:334] "Generic (PLEG): container finished" podID="2a407577-a89c-4bd1-9e97-0140f2ea2c40" containerID="c9969cff494ae52e20e526fca6e1b5b0391a21b2ae22acc43aa68117776b8eec" exitCode=0 Dec 01 14:52:03 crc kubenswrapper[4637]: I1201 14:52:03.484788 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-df4gw" event={"ID":"2a407577-a89c-4bd1-9e97-0140f2ea2c40","Type":"ContainerDied","Data":"c9969cff494ae52e20e526fca6e1b5b0391a21b2ae22acc43aa68117776b8eec"} Dec 01 14:52:03 crc kubenswrapper[4637]: I1201 14:52:03.484872 4637 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-df4gw" event={"ID":"2a407577-a89c-4bd1-9e97-0140f2ea2c40","Type":"ContainerStarted","Data":"25a10a3d442eef181d2d8f6b8958df2380539ed07425ae73c57183a290fe0cb2"} Dec 01 14:52:03 crc kubenswrapper[4637]: I1201 14:52:03.676946 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-787665467f-9kctf"] Dec 01 14:52:03 crc kubenswrapper[4637]: I1201 14:52:03.677391 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-787665467f-9kctf" podUID="bdd915d5-c835-42e4-bca1-943fa35a0890" containerName="controller-manager" containerID="cri-o://74342a4f7bf5575528faed38b3cbf968e92491c6047414cb2eea702fefe71d40" gracePeriod=30 Dec 01 14:52:03 crc kubenswrapper[4637]: I1201 14:52:03.719424 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw"] Dec 01 14:52:03 crc kubenswrapper[4637]: I1201 14:52:03.719624 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw" podUID="daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee" containerName="route-controller-manager" containerID="cri-o://04e71ca3534a0f27a4890ce949abfbd671675733d0303dc174eb624d67e20ddb" gracePeriod=30 Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.080531 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-787665467f-9kctf" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.117140 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdd915d5-c835-42e4-bca1-943fa35a0890-client-ca\") pod \"bdd915d5-c835-42e4-bca1-943fa35a0890\" (UID: \"bdd915d5-c835-42e4-bca1-943fa35a0890\") " Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.117184 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdd915d5-c835-42e4-bca1-943fa35a0890-serving-cert\") pod \"bdd915d5-c835-42e4-bca1-943fa35a0890\" (UID: \"bdd915d5-c835-42e4-bca1-943fa35a0890\") " Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.117287 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdd915d5-c835-42e4-bca1-943fa35a0890-config\") pod \"bdd915d5-c835-42e4-bca1-943fa35a0890\" (UID: \"bdd915d5-c835-42e4-bca1-943fa35a0890\") " Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.117328 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bdd915d5-c835-42e4-bca1-943fa35a0890-proxy-ca-bundles\") pod \"bdd915d5-c835-42e4-bca1-943fa35a0890\" (UID: \"bdd915d5-c835-42e4-bca1-943fa35a0890\") " Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.117371 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q5hv\" (UniqueName: \"kubernetes.io/projected/bdd915d5-c835-42e4-bca1-943fa35a0890-kube-api-access-9q5hv\") pod \"bdd915d5-c835-42e4-bca1-943fa35a0890\" (UID: \"bdd915d5-c835-42e4-bca1-943fa35a0890\") " Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.119327 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/bdd915d5-c835-42e4-bca1-943fa35a0890-client-ca" (OuterVolumeSpecName: "client-ca") pod "bdd915d5-c835-42e4-bca1-943fa35a0890" (UID: "bdd915d5-c835-42e4-bca1-943fa35a0890"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.119364 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdd915d5-c835-42e4-bca1-943fa35a0890-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bdd915d5-c835-42e4-bca1-943fa35a0890" (UID: "bdd915d5-c835-42e4-bca1-943fa35a0890"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.120074 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdd915d5-c835-42e4-bca1-943fa35a0890-config" (OuterVolumeSpecName: "config") pod "bdd915d5-c835-42e4-bca1-943fa35a0890" (UID: "bdd915d5-c835-42e4-bca1-943fa35a0890"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.123134 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdd915d5-c835-42e4-bca1-943fa35a0890-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bdd915d5-c835-42e4-bca1-943fa35a0890" (UID: "bdd915d5-c835-42e4-bca1-943fa35a0890"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.131903 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdd915d5-c835-42e4-bca1-943fa35a0890-kube-api-access-9q5hv" (OuterVolumeSpecName: "kube-api-access-9q5hv") pod "bdd915d5-c835-42e4-bca1-943fa35a0890" (UID: "bdd915d5-c835-42e4-bca1-943fa35a0890"). InnerVolumeSpecName "kube-api-access-9q5hv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.146226 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ldz5j"] Dec 01 14:52:04 crc kubenswrapper[4637]: E1201 14:52:04.146446 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdd915d5-c835-42e4-bca1-943fa35a0890" containerName="controller-manager" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.146462 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdd915d5-c835-42e4-bca1-943fa35a0890" containerName="controller-manager" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.146586 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdd915d5-c835-42e4-bca1-943fa35a0890" containerName="controller-manager" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.147355 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ldz5j" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.154924 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.161272 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ldz5j"] Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.175883 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.218816 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee-config\") pod \"daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee\" (UID: \"daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee\") " Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.218887 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee-client-ca\") pod \"daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee\" (UID: \"daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee\") " Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.218980 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee-serving-cert\") pod \"daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee\" (UID: \"daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee\") " Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.219007 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pc5f\" (UniqueName: \"kubernetes.io/projected/daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee-kube-api-access-9pc5f\") pod \"daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee\" (UID: \"daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee\") " Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.219376 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5efdb983-9c61-4647-9f5b-aad26de5b2d6-catalog-content\") pod \"redhat-operators-ldz5j\" (UID: \"5efdb983-9c61-4647-9f5b-aad26de5b2d6\") " pod="openshift-marketplace/redhat-operators-ldz5j" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.219409 4637 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5efdb983-9c61-4647-9f5b-aad26de5b2d6-utilities\") pod \"redhat-operators-ldz5j\" (UID: \"5efdb983-9c61-4647-9f5b-aad26de5b2d6\") " pod="openshift-marketplace/redhat-operators-ldz5j" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.219523 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q972p\" (UniqueName: \"kubernetes.io/projected/5efdb983-9c61-4647-9f5b-aad26de5b2d6-kube-api-access-q972p\") pod \"redhat-operators-ldz5j\" (UID: \"5efdb983-9c61-4647-9f5b-aad26de5b2d6\") " pod="openshift-marketplace/redhat-operators-ldz5j" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.219600 4637 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bdd915d5-c835-42e4-bca1-943fa35a0890-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.219619 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q5hv\" (UniqueName: \"kubernetes.io/projected/bdd915d5-c835-42e4-bca1-943fa35a0890-kube-api-access-9q5hv\") on node \"crc\" DevicePath \"\"" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.219631 4637 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdd915d5-c835-42e4-bca1-943fa35a0890-client-ca\") on node \"crc\" DevicePath \"\"" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.219642 4637 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdd915d5-c835-42e4-bca1-943fa35a0890-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.219642 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee-client-ca" (OuterVolumeSpecName: "client-ca") pod "daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee" (UID: "daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.219652 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdd915d5-c835-42e4-bca1-943fa35a0890-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.220372 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee-config" (OuterVolumeSpecName: "config") pod "daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee" (UID: "daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.222755 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee" (UID: "daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.222922 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee-kube-api-access-9pc5f" (OuterVolumeSpecName: "kube-api-access-9pc5f") pod "daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee" (UID: "daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee"). InnerVolumeSpecName "kube-api-access-9pc5f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.321242 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5efdb983-9c61-4647-9f5b-aad26de5b2d6-catalog-content\") pod \"redhat-operators-ldz5j\" (UID: \"5efdb983-9c61-4647-9f5b-aad26de5b2d6\") " pod="openshift-marketplace/redhat-operators-ldz5j" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.321378 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5efdb983-9c61-4647-9f5b-aad26de5b2d6-utilities\") pod \"redhat-operators-ldz5j\" (UID: \"5efdb983-9c61-4647-9f5b-aad26de5b2d6\") " pod="openshift-marketplace/redhat-operators-ldz5j" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.321493 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q972p\" (UniqueName: \"kubernetes.io/projected/5efdb983-9c61-4647-9f5b-aad26de5b2d6-kube-api-access-q972p\") pod \"redhat-operators-ldz5j\" (UID: \"5efdb983-9c61-4647-9f5b-aad26de5b2d6\") " pod="openshift-marketplace/redhat-operators-ldz5j" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.321637 4637 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee-client-ca\") on node \"crc\" DevicePath \"\"" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.321716 4637 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.321782 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pc5f\" (UniqueName: \"kubernetes.io/projected/daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee-kube-api-access-9pc5f\") on 
node \"crc\" DevicePath \"\"" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.321847 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.322659 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5efdb983-9c61-4647-9f5b-aad26de5b2d6-catalog-content\") pod \"redhat-operators-ldz5j\" (UID: \"5efdb983-9c61-4647-9f5b-aad26de5b2d6\") " pod="openshift-marketplace/redhat-operators-ldz5j" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.323055 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5efdb983-9c61-4647-9f5b-aad26de5b2d6-utilities\") pod \"redhat-operators-ldz5j\" (UID: \"5efdb983-9c61-4647-9f5b-aad26de5b2d6\") " pod="openshift-marketplace/redhat-operators-ldz5j" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.343688 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7hh95"] Dec 01 14:52:04 crc kubenswrapper[4637]: E1201 14:52:04.343913 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee" containerName="route-controller-manager" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.343941 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee" containerName="route-controller-manager" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.344038 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee" containerName="route-controller-manager" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.344809 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7hh95" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.344976 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q972p\" (UniqueName: \"kubernetes.io/projected/5efdb983-9c61-4647-9f5b-aad26de5b2d6-kube-api-access-q972p\") pod \"redhat-operators-ldz5j\" (UID: \"5efdb983-9c61-4647-9f5b-aad26de5b2d6\") " pod="openshift-marketplace/redhat-operators-ldz5j" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.349005 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.361550 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7hh95"] Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.422750 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4e2ec35-5514-4bff-9b36-d8d58563ca44-utilities\") pod \"certified-operators-7hh95\" (UID: \"b4e2ec35-5514-4bff-9b36-d8d58563ca44\") " pod="openshift-marketplace/certified-operators-7hh95" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.422813 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4e2ec35-5514-4bff-9b36-d8d58563ca44-catalog-content\") pod \"certified-operators-7hh95\" (UID: \"b4e2ec35-5514-4bff-9b36-d8d58563ca44\") " pod="openshift-marketplace/certified-operators-7hh95" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.422837 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9nd6\" (UniqueName: \"kubernetes.io/projected/b4e2ec35-5514-4bff-9b36-d8d58563ca44-kube-api-access-c9nd6\") pod \"certified-operators-7hh95\" (UID: 
\"b4e2ec35-5514-4bff-9b36-d8d58563ca44\") " pod="openshift-marketplace/certified-operators-7hh95" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.468587 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ldz5j" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.495115 4637 generic.go:334] "Generic (PLEG): container finished" podID="bdd915d5-c835-42e4-bca1-943fa35a0890" containerID="74342a4f7bf5575528faed38b3cbf968e92491c6047414cb2eea702fefe71d40" exitCode=0 Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.495178 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-787665467f-9kctf" event={"ID":"bdd915d5-c835-42e4-bca1-943fa35a0890","Type":"ContainerDied","Data":"74342a4f7bf5575528faed38b3cbf968e92491c6047414cb2eea702fefe71d40"} Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.495208 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-787665467f-9kctf" event={"ID":"bdd915d5-c835-42e4-bca1-943fa35a0890","Type":"ContainerDied","Data":"ac5f94c6f0c643d4a7c17fbc6be9447df6892c8a3cbd227a5f8b8d01aa48fbeb"} Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.495231 4637 scope.go:117] "RemoveContainer" containerID="74342a4f7bf5575528faed38b3cbf968e92491c6047414cb2eea702fefe71d40" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.495356 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-787665467f-9kctf" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.506000 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gbh9m" event={"ID":"8331a591-a7d8-4c36-ae47-f973a8468986","Type":"ContainerStarted","Data":"1f3619e663b8c73d6d295f48e61eb7a33422a54d09245a184ed5d0a60b85b4e1"} Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.516753 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-df4gw" event={"ID":"2a407577-a89c-4bd1-9e97-0140f2ea2c40","Type":"ContainerStarted","Data":"94216d006893bd00b19f6139d849e28c0ea5b7116a1360eda9097405396cd0ce"} Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.527649 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4e2ec35-5514-4bff-9b36-d8d58563ca44-utilities\") pod \"certified-operators-7hh95\" (UID: \"b4e2ec35-5514-4bff-9b36-d8d58563ca44\") " pod="openshift-marketplace/certified-operators-7hh95" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.527741 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4e2ec35-5514-4bff-9b36-d8d58563ca44-catalog-content\") pod \"certified-operators-7hh95\" (UID: \"b4e2ec35-5514-4bff-9b36-d8d58563ca44\") " pod="openshift-marketplace/certified-operators-7hh95" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.527768 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9nd6\" (UniqueName: \"kubernetes.io/projected/b4e2ec35-5514-4bff-9b36-d8d58563ca44-kube-api-access-c9nd6\") pod \"certified-operators-7hh95\" (UID: \"b4e2ec35-5514-4bff-9b36-d8d58563ca44\") " pod="openshift-marketplace/certified-operators-7hh95" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.528661 
4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4e2ec35-5514-4bff-9b36-d8d58563ca44-utilities\") pod \"certified-operators-7hh95\" (UID: \"b4e2ec35-5514-4bff-9b36-d8d58563ca44\") " pod="openshift-marketplace/certified-operators-7hh95" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.529259 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4e2ec35-5514-4bff-9b36-d8d58563ca44-catalog-content\") pod \"certified-operators-7hh95\" (UID: \"b4e2ec35-5514-4bff-9b36-d8d58563ca44\") " pod="openshift-marketplace/certified-operators-7hh95" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.532191 4637 scope.go:117] "RemoveContainer" containerID="74342a4f7bf5575528faed38b3cbf968e92491c6047414cb2eea702fefe71d40" Dec 01 14:52:04 crc kubenswrapper[4637]: E1201 14:52:04.532802 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74342a4f7bf5575528faed38b3cbf968e92491c6047414cb2eea702fefe71d40\": container with ID starting with 74342a4f7bf5575528faed38b3cbf968e92491c6047414cb2eea702fefe71d40 not found: ID does not exist" containerID="74342a4f7bf5575528faed38b3cbf968e92491c6047414cb2eea702fefe71d40" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.532835 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74342a4f7bf5575528faed38b3cbf968e92491c6047414cb2eea702fefe71d40"} err="failed to get container status \"74342a4f7bf5575528faed38b3cbf968e92491c6047414cb2eea702fefe71d40\": rpc error: code = NotFound desc = could not find container \"74342a4f7bf5575528faed38b3cbf968e92491c6047414cb2eea702fefe71d40\": container with ID starting with 74342a4f7bf5575528faed38b3cbf968e92491c6047414cb2eea702fefe71d40 not found: ID does not exist" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 
14:52:04.535025 4637 generic.go:334] "Generic (PLEG): container finished" podID="daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee" containerID="04e71ca3534a0f27a4890ce949abfbd671675733d0303dc174eb624d67e20ddb" exitCode=0 Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.535066 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw" event={"ID":"daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee","Type":"ContainerDied","Data":"04e71ca3534a0f27a4890ce949abfbd671675733d0303dc174eb624d67e20ddb"} Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.535093 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw" event={"ID":"daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee","Type":"ContainerDied","Data":"55ac9a6b164652022aeb2464ad1e535df6de7dd46c40c2cca828c8ed32e108f5"} Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.535106 4637 scope.go:117] "RemoveContainer" containerID="04e71ca3534a0f27a4890ce949abfbd671675733d0303dc174eb624d67e20ddb" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.535159 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.548796 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9nd6\" (UniqueName: \"kubernetes.io/projected/b4e2ec35-5514-4bff-9b36-d8d58563ca44-kube-api-access-c9nd6\") pod \"certified-operators-7hh95\" (UID: \"b4e2ec35-5514-4bff-9b36-d8d58563ca44\") " pod="openshift-marketplace/certified-operators-7hh95" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.568840 4637 scope.go:117] "RemoveContainer" containerID="04e71ca3534a0f27a4890ce949abfbd671675733d0303dc174eb624d67e20ddb" Dec 01 14:52:04 crc kubenswrapper[4637]: E1201 14:52:04.575687 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04e71ca3534a0f27a4890ce949abfbd671675733d0303dc174eb624d67e20ddb\": container with ID starting with 04e71ca3534a0f27a4890ce949abfbd671675733d0303dc174eb624d67e20ddb not found: ID does not exist" containerID="04e71ca3534a0f27a4890ce949abfbd671675733d0303dc174eb624d67e20ddb" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.575729 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04e71ca3534a0f27a4890ce949abfbd671675733d0303dc174eb624d67e20ddb"} err="failed to get container status \"04e71ca3534a0f27a4890ce949abfbd671675733d0303dc174eb624d67e20ddb\": rpc error: code = NotFound desc = could not find container \"04e71ca3534a0f27a4890ce949abfbd671675733d0303dc174eb624d67e20ddb\": container with ID starting with 04e71ca3534a0f27a4890ce949abfbd671675733d0303dc174eb624d67e20ddb not found: ID does not exist" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.581011 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-787665467f-9kctf"] Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.586720 
4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-787665467f-9kctf"] Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.624181 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw"] Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.626954 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5695f464f5-tfkmw"] Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.675786 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7hh95" Dec 01 14:52:04 crc kubenswrapper[4637]: I1201 14:52:04.919703 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ldz5j"] Dec 01 14:52:04 crc kubenswrapper[4637]: W1201 14:52:04.927574 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5efdb983_9c61_4647_9f5b_aad26de5b2d6.slice/crio-22be1858d973634af1baf43d99eb803ad39fbb906cc551568f00187532f87db1 WatchSource:0}: Error finding container 22be1858d973634af1baf43d99eb803ad39fbb906cc551568f00187532f87db1: Status 404 returned error can't find the container with id 22be1858d973634af1baf43d99eb803ad39fbb906cc551568f00187532f87db1 Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.069785 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7hh95"] Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.122996 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dbcb65d46-554g2"] Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.123878 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dbcb65d46-554g2" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.128716 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7db455f998-8rjdx"] Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.129362 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7db455f998-8rjdx" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.132684 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.133038 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.133961 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.135072 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.135986 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.136666 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.138591 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.138715 4637 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"serving-cert" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.138824 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7db455f998-8rjdx"] Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.138869 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dbcb65d46-554g2"] Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.138844 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.138954 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.139127 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.139246 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.140794 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ae24bc7-ee40-4c74-80dc-554b59e33ca3-config\") pod \"route-controller-manager-7dbcb65d46-554g2\" (UID: \"6ae24bc7-ee40-4c74-80dc-554b59e33ca3\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb65d46-554g2" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.140833 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtnzf\" (UniqueName: \"kubernetes.io/projected/6ae24bc7-ee40-4c74-80dc-554b59e33ca3-kube-api-access-mtnzf\") pod \"route-controller-manager-7dbcb65d46-554g2\" (UID: 
\"6ae24bc7-ee40-4c74-80dc-554b59e33ca3\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb65d46-554g2" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.140877 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ae24bc7-ee40-4c74-80dc-554b59e33ca3-serving-cert\") pod \"route-controller-manager-7dbcb65d46-554g2\" (UID: \"6ae24bc7-ee40-4c74-80dc-554b59e33ca3\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb65d46-554g2" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.140908 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ae24bc7-ee40-4c74-80dc-554b59e33ca3-client-ca\") pod \"route-controller-manager-7dbcb65d46-554g2\" (UID: \"6ae24bc7-ee40-4c74-80dc-554b59e33ca3\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb65d46-554g2" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.143391 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 01 14:52:05 crc kubenswrapper[4637]: E1201 14:52:05.150462 4637 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5efdb983_9c61_4647_9f5b_aad26de5b2d6.slice/crio-conmon-b177e50a094493202deac264f262e04c646b87780c2a05247bfa667b74dd24e8.scope\": RecentStats: unable to find data in memory cache]" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.242382 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54691df8-8d41-4dd3-8e0a-8ec039867bae-config\") pod \"controller-manager-7db455f998-8rjdx\" (UID: \"54691df8-8d41-4dd3-8e0a-8ec039867bae\") " 
pod="openshift-controller-manager/controller-manager-7db455f998-8rjdx" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.242448 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ae24bc7-ee40-4c74-80dc-554b59e33ca3-client-ca\") pod \"route-controller-manager-7dbcb65d46-554g2\" (UID: \"6ae24bc7-ee40-4c74-80dc-554b59e33ca3\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb65d46-554g2" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.242477 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54691df8-8d41-4dd3-8e0a-8ec039867bae-client-ca\") pod \"controller-manager-7db455f998-8rjdx\" (UID: \"54691df8-8d41-4dd3-8e0a-8ec039867bae\") " pod="openshift-controller-manager/controller-manager-7db455f998-8rjdx" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.242508 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54691df8-8d41-4dd3-8e0a-8ec039867bae-serving-cert\") pod \"controller-manager-7db455f998-8rjdx\" (UID: \"54691df8-8d41-4dd3-8e0a-8ec039867bae\") " pod="openshift-controller-manager/controller-manager-7db455f998-8rjdx" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.242536 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ae24bc7-ee40-4c74-80dc-554b59e33ca3-config\") pod \"route-controller-manager-7dbcb65d46-554g2\" (UID: \"6ae24bc7-ee40-4c74-80dc-554b59e33ca3\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb65d46-554g2" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.242555 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/54691df8-8d41-4dd3-8e0a-8ec039867bae-proxy-ca-bundles\") pod \"controller-manager-7db455f998-8rjdx\" (UID: \"54691df8-8d41-4dd3-8e0a-8ec039867bae\") " pod="openshift-controller-manager/controller-manager-7db455f998-8rjdx" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.242728 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtnzf\" (UniqueName: \"kubernetes.io/projected/6ae24bc7-ee40-4c74-80dc-554b59e33ca3-kube-api-access-mtnzf\") pod \"route-controller-manager-7dbcb65d46-554g2\" (UID: \"6ae24bc7-ee40-4c74-80dc-554b59e33ca3\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb65d46-554g2" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.242908 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ae24bc7-ee40-4c74-80dc-554b59e33ca3-serving-cert\") pod \"route-controller-manager-7dbcb65d46-554g2\" (UID: \"6ae24bc7-ee40-4c74-80dc-554b59e33ca3\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb65d46-554g2" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.242969 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbqm4\" (UniqueName: \"kubernetes.io/projected/54691df8-8d41-4dd3-8e0a-8ec039867bae-kube-api-access-zbqm4\") pod \"controller-manager-7db455f998-8rjdx\" (UID: \"54691df8-8d41-4dd3-8e0a-8ec039867bae\") " pod="openshift-controller-manager/controller-manager-7db455f998-8rjdx" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.244180 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ae24bc7-ee40-4c74-80dc-554b59e33ca3-client-ca\") pod \"route-controller-manager-7dbcb65d46-554g2\" (UID: \"6ae24bc7-ee40-4c74-80dc-554b59e33ca3\") " 
pod="openshift-route-controller-manager/route-controller-manager-7dbcb65d46-554g2" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.245143 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ae24bc7-ee40-4c74-80dc-554b59e33ca3-config\") pod \"route-controller-manager-7dbcb65d46-554g2\" (UID: \"6ae24bc7-ee40-4c74-80dc-554b59e33ca3\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb65d46-554g2" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.247913 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ae24bc7-ee40-4c74-80dc-554b59e33ca3-serving-cert\") pod \"route-controller-manager-7dbcb65d46-554g2\" (UID: \"6ae24bc7-ee40-4c74-80dc-554b59e33ca3\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb65d46-554g2" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.257552 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtnzf\" (UniqueName: \"kubernetes.io/projected/6ae24bc7-ee40-4c74-80dc-554b59e33ca3-kube-api-access-mtnzf\") pod \"route-controller-manager-7dbcb65d46-554g2\" (UID: \"6ae24bc7-ee40-4c74-80dc-554b59e33ca3\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb65d46-554g2" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.293618 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dbcb65d46-554g2" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.349081 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54691df8-8d41-4dd3-8e0a-8ec039867bae-serving-cert\") pod \"controller-manager-7db455f998-8rjdx\" (UID: \"54691df8-8d41-4dd3-8e0a-8ec039867bae\") " pod="openshift-controller-manager/controller-manager-7db455f998-8rjdx" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.349166 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54691df8-8d41-4dd3-8e0a-8ec039867bae-proxy-ca-bundles\") pod \"controller-manager-7db455f998-8rjdx\" (UID: \"54691df8-8d41-4dd3-8e0a-8ec039867bae\") " pod="openshift-controller-manager/controller-manager-7db455f998-8rjdx" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.349271 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbqm4\" (UniqueName: \"kubernetes.io/projected/54691df8-8d41-4dd3-8e0a-8ec039867bae-kube-api-access-zbqm4\") pod \"controller-manager-7db455f998-8rjdx\" (UID: \"54691df8-8d41-4dd3-8e0a-8ec039867bae\") " pod="openshift-controller-manager/controller-manager-7db455f998-8rjdx" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.349316 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54691df8-8d41-4dd3-8e0a-8ec039867bae-config\") pod \"controller-manager-7db455f998-8rjdx\" (UID: \"54691df8-8d41-4dd3-8e0a-8ec039867bae\") " pod="openshift-controller-manager/controller-manager-7db455f998-8rjdx" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.349363 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/54691df8-8d41-4dd3-8e0a-8ec039867bae-client-ca\") pod \"controller-manager-7db455f998-8rjdx\" (UID: \"54691df8-8d41-4dd3-8e0a-8ec039867bae\") " pod="openshift-controller-manager/controller-manager-7db455f998-8rjdx" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.350804 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54691df8-8d41-4dd3-8e0a-8ec039867bae-proxy-ca-bundles\") pod \"controller-manager-7db455f998-8rjdx\" (UID: \"54691df8-8d41-4dd3-8e0a-8ec039867bae\") " pod="openshift-controller-manager/controller-manager-7db455f998-8rjdx" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.351459 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54691df8-8d41-4dd3-8e0a-8ec039867bae-config\") pod \"controller-manager-7db455f998-8rjdx\" (UID: \"54691df8-8d41-4dd3-8e0a-8ec039867bae\") " pod="openshift-controller-manager/controller-manager-7db455f998-8rjdx" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.352903 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54691df8-8d41-4dd3-8e0a-8ec039867bae-serving-cert\") pod \"controller-manager-7db455f998-8rjdx\" (UID: \"54691df8-8d41-4dd3-8e0a-8ec039867bae\") " pod="openshift-controller-manager/controller-manager-7db455f998-8rjdx" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.353484 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54691df8-8d41-4dd3-8e0a-8ec039867bae-client-ca\") pod \"controller-manager-7db455f998-8rjdx\" (UID: \"54691df8-8d41-4dd3-8e0a-8ec039867bae\") " pod="openshift-controller-manager/controller-manager-7db455f998-8rjdx" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.366410 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-zbqm4\" (UniqueName: \"kubernetes.io/projected/54691df8-8d41-4dd3-8e0a-8ec039867bae-kube-api-access-zbqm4\") pod \"controller-manager-7db455f998-8rjdx\" (UID: \"54691df8-8d41-4dd3-8e0a-8ec039867bae\") " pod="openshift-controller-manager/controller-manager-7db455f998-8rjdx" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.542608 4637 generic.go:334] "Generic (PLEG): container finished" podID="5efdb983-9c61-4647-9f5b-aad26de5b2d6" containerID="b177e50a094493202deac264f262e04c646b87780c2a05247bfa667b74dd24e8" exitCode=0 Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.542718 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldz5j" event={"ID":"5efdb983-9c61-4647-9f5b-aad26de5b2d6","Type":"ContainerDied","Data":"b177e50a094493202deac264f262e04c646b87780c2a05247bfa667b74dd24e8"} Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.543094 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldz5j" event={"ID":"5efdb983-9c61-4647-9f5b-aad26de5b2d6","Type":"ContainerStarted","Data":"22be1858d973634af1baf43d99eb803ad39fbb906cc551568f00187532f87db1"} Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.550493 4637 generic.go:334] "Generic (PLEG): container finished" podID="2a407577-a89c-4bd1-9e97-0140f2ea2c40" containerID="94216d006893bd00b19f6139d849e28c0ea5b7116a1360eda9097405396cd0ce" exitCode=0 Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.550566 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-df4gw" event={"ID":"2a407577-a89c-4bd1-9e97-0140f2ea2c40","Type":"ContainerDied","Data":"94216d006893bd00b19f6139d849e28c0ea5b7116a1360eda9097405396cd0ce"} Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.556057 4637 generic.go:334] "Generic (PLEG): container finished" podID="b4e2ec35-5514-4bff-9b36-d8d58563ca44" 
containerID="51cbdd680ea7056b3ffd1a0db7355febd8bb5e94fbff112023136d423c0b2df4" exitCode=0 Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.556140 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hh95" event={"ID":"b4e2ec35-5514-4bff-9b36-d8d58563ca44","Type":"ContainerDied","Data":"51cbdd680ea7056b3ffd1a0db7355febd8bb5e94fbff112023136d423c0b2df4"} Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.556170 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hh95" event={"ID":"b4e2ec35-5514-4bff-9b36-d8d58563ca44","Type":"ContainerStarted","Data":"dcbfc42069a834099214fe2943b05cc6e9f2a889e53c5cc23ac2a8caad442166"} Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.560862 4637 generic.go:334] "Generic (PLEG): container finished" podID="8331a591-a7d8-4c36-ae47-f973a8468986" containerID="1f3619e663b8c73d6d295f48e61eb7a33422a54d09245a184ed5d0a60b85b4e1" exitCode=0 Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.560905 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gbh9m" event={"ID":"8331a591-a7d8-4c36-ae47-f973a8468986","Type":"ContainerDied","Data":"1f3619e663b8c73d6d295f48e61eb7a33422a54d09245a184ed5d0a60b85b4e1"} Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.620415 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7db455f998-8rjdx" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.681199 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dbcb65d46-554g2"] Dec 01 14:52:05 crc kubenswrapper[4637]: W1201 14:52:05.693306 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae24bc7_ee40_4c74_80dc_554b59e33ca3.slice/crio-e32212c5b6ff2bb6a4223dd1c2a86d02a776ba81894b807a19be8542c44ef9db WatchSource:0}: Error finding container e32212c5b6ff2bb6a4223dd1c2a86d02a776ba81894b807a19be8542c44ef9db: Status 404 returned error can't find the container with id e32212c5b6ff2bb6a4223dd1c2a86d02a776ba81894b807a19be8542c44ef9db Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.780256 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdd915d5-c835-42e4-bca1-943fa35a0890" path="/var/lib/kubelet/pods/bdd915d5-c835-42e4-bca1-943fa35a0890/volumes" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.781950 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee" path="/var/lib/kubelet/pods/daf6b22c-aa18-4c3f-9560-d4d3bb7dc9ee/volumes" Dec 01 14:52:05 crc kubenswrapper[4637]: I1201 14:52:05.818293 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7db455f998-8rjdx"] Dec 01 14:52:05 crc kubenswrapper[4637]: W1201 14:52:05.825511 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54691df8_8d41_4dd3_8e0a_8ec039867bae.slice/crio-9a8810c20b2dda0364638eb1fb5d0987d19ad09f1757915e446d788ba15e90b6 WatchSource:0}: Error finding container 9a8810c20b2dda0364638eb1fb5d0987d19ad09f1757915e446d788ba15e90b6: Status 404 returned error can't find the container with 
id 9a8810c20b2dda0364638eb1fb5d0987d19ad09f1757915e446d788ba15e90b6 Dec 01 14:52:06 crc kubenswrapper[4637]: I1201 14:52:06.575374 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-df4gw" event={"ID":"2a407577-a89c-4bd1-9e97-0140f2ea2c40","Type":"ContainerStarted","Data":"36eac526c3a0a9fe55766d0180c4e0e4d5742eab558197e87c2ce15b13bca683"} Dec 01 14:52:06 crc kubenswrapper[4637]: I1201 14:52:06.583972 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7db455f998-8rjdx" event={"ID":"54691df8-8d41-4dd3-8e0a-8ec039867bae","Type":"ContainerStarted","Data":"6a5967d9ffe96e1887fa08e57d9b42a17e5be934edcc45eca09254b56e8b51e4"} Dec 01 14:52:06 crc kubenswrapper[4637]: I1201 14:52:06.584045 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7db455f998-8rjdx" event={"ID":"54691df8-8d41-4dd3-8e0a-8ec039867bae","Type":"ContainerStarted","Data":"9a8810c20b2dda0364638eb1fb5d0987d19ad09f1757915e446d788ba15e90b6"} Dec 01 14:52:06 crc kubenswrapper[4637]: I1201 14:52:06.584265 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7db455f998-8rjdx" Dec 01 14:52:06 crc kubenswrapper[4637]: I1201 14:52:06.590487 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7db455f998-8rjdx" Dec 01 14:52:06 crc kubenswrapper[4637]: I1201 14:52:06.598198 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hh95" event={"ID":"b4e2ec35-5514-4bff-9b36-d8d58563ca44","Type":"ContainerStarted","Data":"ec80026ef7422091765ea0ff62b6b7db700559dfcecb84a255a9b5570cf261d6"} Dec 01 14:52:06 crc kubenswrapper[4637]: I1201 14:52:06.605131 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-7dbcb65d46-554g2" event={"ID":"6ae24bc7-ee40-4c74-80dc-554b59e33ca3","Type":"ContainerStarted","Data":"8bedfcb758c394cd0132ae82d40d5cc3b06509e00304e0ef4602c694f8d7c244"} Dec 01 14:52:06 crc kubenswrapper[4637]: I1201 14:52:06.605184 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dbcb65d46-554g2" event={"ID":"6ae24bc7-ee40-4c74-80dc-554b59e33ca3","Type":"ContainerStarted","Data":"e32212c5b6ff2bb6a4223dd1c2a86d02a776ba81894b807a19be8542c44ef9db"} Dec 01 14:52:06 crc kubenswrapper[4637]: I1201 14:52:06.606390 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7dbcb65d46-554g2" Dec 01 14:52:06 crc kubenswrapper[4637]: I1201 14:52:06.611166 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7dbcb65d46-554g2" Dec 01 14:52:06 crc kubenswrapper[4637]: I1201 14:52:06.623681 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-df4gw" podStartSLOduration=2.912968804 podStartE2EDuration="5.623657611s" podCreationTimestamp="2025-12-01 14:52:01 +0000 UTC" firstStartedPulling="2025-12-01 14:52:03.487354012 +0000 UTC m=+374.005062840" lastFinishedPulling="2025-12-01 14:52:06.198042819 +0000 UTC m=+376.715751647" observedRunningTime="2025-12-01 14:52:06.622121507 +0000 UTC m=+377.139830335" watchObservedRunningTime="2025-12-01 14:52:06.623657611 +0000 UTC m=+377.141366439" Dec 01 14:52:06 crc kubenswrapper[4637]: I1201 14:52:06.717161 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7dbcb65d46-554g2" podStartSLOduration=3.71713428 podStartE2EDuration="3.71713428s" podCreationTimestamp="2025-12-01 14:52:03 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:52:06.701426942 +0000 UTC m=+377.219135780" watchObservedRunningTime="2025-12-01 14:52:06.71713428 +0000 UTC m=+377.234843108" Dec 01 14:52:06 crc kubenswrapper[4637]: I1201 14:52:06.720518 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7db455f998-8rjdx" podStartSLOduration=3.720497716 podStartE2EDuration="3.720497716s" podCreationTimestamp="2025-12-01 14:52:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:52:06.67864378 +0000 UTC m=+377.196352608" watchObservedRunningTime="2025-12-01 14:52:06.720497716 +0000 UTC m=+377.238206544" Dec 01 14:52:07 crc kubenswrapper[4637]: I1201 14:52:07.615246 4637 generic.go:334] "Generic (PLEG): container finished" podID="b4e2ec35-5514-4bff-9b36-d8d58563ca44" containerID="ec80026ef7422091765ea0ff62b6b7db700559dfcecb84a255a9b5570cf261d6" exitCode=0 Dec 01 14:52:07 crc kubenswrapper[4637]: I1201 14:52:07.615344 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hh95" event={"ID":"b4e2ec35-5514-4bff-9b36-d8d58563ca44","Type":"ContainerDied","Data":"ec80026ef7422091765ea0ff62b6b7db700559dfcecb84a255a9b5570cf261d6"} Dec 01 14:52:07 crc kubenswrapper[4637]: I1201 14:52:07.619763 4637 generic.go:334] "Generic (PLEG): container finished" podID="5efdb983-9c61-4647-9f5b-aad26de5b2d6" containerID="d05f9a1a1cd4298df0a2e20f9454aaebb6d3314ddfac558ffafb92655340e375" exitCode=0 Dec 01 14:52:07 crc kubenswrapper[4637]: I1201 14:52:07.619843 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldz5j" 
event={"ID":"5efdb983-9c61-4647-9f5b-aad26de5b2d6","Type":"ContainerDied","Data":"d05f9a1a1cd4298df0a2e20f9454aaebb6d3314ddfac558ffafb92655340e375"} Dec 01 14:52:07 crc kubenswrapper[4637]: I1201 14:52:07.627565 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gbh9m" event={"ID":"8331a591-a7d8-4c36-ae47-f973a8468986","Type":"ContainerStarted","Data":"42a163611de55e35b6f178678d3b655bc35cc611d29c752c0e335c2d018a597b"} Dec 01 14:52:07 crc kubenswrapper[4637]: I1201 14:52:07.664983 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gbh9m" podStartSLOduration=3.613381849 podStartE2EDuration="6.664959402s" podCreationTimestamp="2025-12-01 14:52:01 +0000 UTC" firstStartedPulling="2025-12-01 14:52:03.48214035 +0000 UTC m=+373.999849178" lastFinishedPulling="2025-12-01 14:52:06.533717903 +0000 UTC m=+377.051426731" observedRunningTime="2025-12-01 14:52:07.661570945 +0000 UTC m=+378.179279793" watchObservedRunningTime="2025-12-01 14:52:07.664959402 +0000 UTC m=+378.182668230" Dec 01 14:52:08 crc kubenswrapper[4637]: I1201 14:52:08.634792 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hh95" event={"ID":"b4e2ec35-5514-4bff-9b36-d8d58563ca44","Type":"ContainerStarted","Data":"63310e63ba9f9c93f0055e817402efc44b4fd285235e9d4ad10170df7930afdb"} Dec 01 14:52:08 crc kubenswrapper[4637]: I1201 14:52:08.638111 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldz5j" event={"ID":"5efdb983-9c61-4647-9f5b-aad26de5b2d6","Type":"ContainerStarted","Data":"a23404b80655a6203235b36624b148493d4982ec39ed5a60a9f9aab68209b70e"} Dec 01 14:52:08 crc kubenswrapper[4637]: I1201 14:52:08.655205 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7hh95" podStartSLOduration=2.166468588 
podStartE2EDuration="4.655173374s" podCreationTimestamp="2025-12-01 14:52:04 +0000 UTC" firstStartedPulling="2025-12-01 14:52:05.561958918 +0000 UTC m=+376.079667756" lastFinishedPulling="2025-12-01 14:52:08.050663714 +0000 UTC m=+378.568372542" observedRunningTime="2025-12-01 14:52:08.654194205 +0000 UTC m=+379.171903033" watchObservedRunningTime="2025-12-01 14:52:08.655173374 +0000 UTC m=+379.172882202" Dec 01 14:52:08 crc kubenswrapper[4637]: I1201 14:52:08.682907 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ldz5j" podStartSLOduration=1.804317877 podStartE2EDuration="4.682886575s" podCreationTimestamp="2025-12-01 14:52:04 +0000 UTC" firstStartedPulling="2025-12-01 14:52:05.544899941 +0000 UTC m=+376.062608769" lastFinishedPulling="2025-12-01 14:52:08.423468639 +0000 UTC m=+378.941177467" observedRunningTime="2025-12-01 14:52:08.67360568 +0000 UTC m=+379.191314508" watchObservedRunningTime="2025-12-01 14:52:08.682886575 +0000 UTC m=+379.200595403" Dec 01 14:52:12 crc kubenswrapper[4637]: I1201 14:52:12.065771 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-df4gw" Dec 01 14:52:12 crc kubenswrapper[4637]: I1201 14:52:12.066111 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-df4gw" Dec 01 14:52:12 crc kubenswrapper[4637]: I1201 14:52:12.109902 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-df4gw" Dec 01 14:52:12 crc kubenswrapper[4637]: I1201 14:52:12.260946 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gbh9m" Dec 01 14:52:12 crc kubenswrapper[4637]: I1201 14:52:12.261006 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gbh9m" Dec 01 14:52:12 
crc kubenswrapper[4637]: I1201 14:52:12.301854 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gbh9m" Dec 01 14:52:12 crc kubenswrapper[4637]: I1201 14:52:12.739117 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gbh9m" Dec 01 14:52:12 crc kubenswrapper[4637]: I1201 14:52:12.744055 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-df4gw" Dec 01 14:52:14 crc kubenswrapper[4637]: I1201 14:52:14.470007 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ldz5j" Dec 01 14:52:18 crc kubenswrapper[4637]: I1201 14:52:14.470344 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ldz5j" Dec 01 14:52:18 crc kubenswrapper[4637]: I1201 14:52:14.519695 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ldz5j" Dec 01 14:52:18 crc kubenswrapper[4637]: I1201 14:52:14.677414 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7hh95" Dec 01 14:52:18 crc kubenswrapper[4637]: I1201 14:52:14.677458 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7hh95" Dec 01 14:52:18 crc kubenswrapper[4637]: I1201 14:52:14.706817 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ldz5j" Dec 01 14:52:18 crc kubenswrapper[4637]: I1201 14:52:14.719829 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7hh95" Dec 01 14:52:18 crc kubenswrapper[4637]: I1201 14:52:15.613896 4637 patch_prober.go:28] interesting 
pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 14:52:18 crc kubenswrapper[4637]: I1201 14:52:15.614012 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 14:52:18 crc kubenswrapper[4637]: I1201 14:52:15.739471 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7hh95" Dec 01 14:52:20 crc kubenswrapper[4637]: I1201 14:52:20.473452 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-2tgpf" Dec 01 14:52:20 crc kubenswrapper[4637]: I1201 14:52:20.564454 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8b86l"] Dec 01 14:52:45 crc kubenswrapper[4637]: I1201 14:52:45.617343 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 14:52:45 crc kubenswrapper[4637]: I1201 14:52:45.617861 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 14:52:45 crc 
kubenswrapper[4637]: I1201 14:52:45.617956 4637 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 14:52:45 crc kubenswrapper[4637]: I1201 14:52:45.618758 4637 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4885cfa0f150a63ffa8391a1ed4c896a43e4cdd3b372dc06af2d7e94293fae9c"} pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 14:52:45 crc kubenswrapper[4637]: I1201 14:52:45.618827 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" containerID="cri-o://4885cfa0f150a63ffa8391a1ed4c896a43e4cdd3b372dc06af2d7e94293fae9c" gracePeriod=600 Dec 01 14:52:45 crc kubenswrapper[4637]: I1201 14:52:45.622249 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" podUID="f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b" containerName="registry" containerID="cri-o://32a82240aab19455b47b27b284858d8e4f8e8a85c7fcc445c5474de613799d1b" gracePeriod=30 Dec 01 14:52:45 crc kubenswrapper[4637]: W1201 14:52:45.634655 4637 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf242fd70_f7ff_48a3_b7ed_2a5d2082ba5b.slice/crio-32a82240aab19455b47b27b284858d8e4f8e8a85c7fcc445c5474de613799d1b.scope/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf242fd70_f7ff_48a3_b7ed_2a5d2082ba5b.slice/crio-32a82240aab19455b47b27b284858d8e4f8e8a85c7fcc445c5474de613799d1b.scope/cpuset.cpus.effective: no such device Dec 01 14:52:45 crc kubenswrapper[4637]: 
E1201 14:52:45.663279 4637 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2db6c86b_ff8c_4746_9c91_7dac0498c0b9.slice/crio-4885cfa0f150a63ffa8391a1ed4c896a43e4cdd3b372dc06af2d7e94293fae9c.scope\": RecentStats: unable to find data in memory cache]" Dec 01 14:52:45 crc kubenswrapper[4637]: I1201 14:52:45.839776 4637 generic.go:334] "Generic (PLEG): container finished" podID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerID="4885cfa0f150a63ffa8391a1ed4c896a43e4cdd3b372dc06af2d7e94293fae9c" exitCode=0 Dec 01 14:52:45 crc kubenswrapper[4637]: I1201 14:52:45.839859 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerDied","Data":"4885cfa0f150a63ffa8391a1ed4c896a43e4cdd3b372dc06af2d7e94293fae9c"} Dec 01 14:52:45 crc kubenswrapper[4637]: I1201 14:52:45.840285 4637 scope.go:117] "RemoveContainer" containerID="8aa96b8d7d06d8523b4433d6feffb35159d51dde9e9c76c624fe1a69bed0f4a8" Dec 01 14:52:45 crc kubenswrapper[4637]: I1201 14:52:45.842754 4637 generic.go:334] "Generic (PLEG): container finished" podID="f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b" containerID="32a82240aab19455b47b27b284858d8e4f8e8a85c7fcc445c5474de613799d1b" exitCode=0 Dec 01 14:52:45 crc kubenswrapper[4637]: I1201 14:52:45.842788 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" event={"ID":"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b","Type":"ContainerDied","Data":"32a82240aab19455b47b27b284858d8e4f8e8a85c7fcc445c5474de613799d1b"} Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.083407 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.227980 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-bound-sa-token\") pod \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.228055 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-registry-certificates\") pod \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.228124 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-ca-trust-extracted\") pod \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.229306 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.240586 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b"). 
InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.253291 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-trusted-ca\") pod \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.253374 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-installation-pull-secrets\") pod \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.253545 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.254099 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28km5\" (UniqueName: \"kubernetes.io/projected/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-kube-api-access-28km5\") pod \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.254150 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-registry-tls\") pod \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\" (UID: \"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b\") " Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.254654 4637 reconciler_common.go:293] 
"Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.254675 4637 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.254719 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.257321 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-kube-api-access-28km5" (OuterVolumeSpecName: "kube-api-access-28km5") pod "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b"). InnerVolumeSpecName "kube-api-access-28km5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.260173 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.264775 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.268309 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.273761 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b" (UID: "f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.355856 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28km5\" (UniqueName: \"kubernetes.io/projected/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-kube-api-access-28km5\") on node \"crc\" DevicePath \"\"" Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.355916 4637 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.355959 4637 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.355976 4637 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.355990 4637 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.851823 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerStarted","Data":"e755a6e309afd78aa4596b9c41707c84ca8de306749cc711e0f957364ca44ea0"} Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.854681 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" 
event={"ID":"f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b","Type":"ContainerDied","Data":"714729baab0eb8b0a9f9ac7e202d564a7697410dc9c3caadcb4e34cbbefdfc03"} Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.854734 4637 scope.go:117] "RemoveContainer" containerID="32a82240aab19455b47b27b284858d8e4f8e8a85c7fcc445c5474de613799d1b" Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.854736 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8b86l" Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.904126 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8b86l"] Dec 01 14:52:46 crc kubenswrapper[4637]: I1201 14:52:46.908168 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8b86l"] Dec 01 14:52:47 crc kubenswrapper[4637]: I1201 14:52:47.778507 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b" path="/var/lib/kubelet/pods/f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b/volumes" Dec 01 14:54:45 crc kubenswrapper[4637]: I1201 14:54:45.613212 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 14:54:45 crc kubenswrapper[4637]: I1201 14:54:45.613851 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 14:55:15 crc kubenswrapper[4637]: I1201 14:55:15.613915 4637 patch_prober.go:28] interesting 
pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 14:55:15 crc kubenswrapper[4637]: I1201 14:55:15.614452 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 14:55:45 crc kubenswrapper[4637]: I1201 14:55:45.613700 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 14:55:45 crc kubenswrapper[4637]: I1201 14:55:45.614647 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 14:55:45 crc kubenswrapper[4637]: I1201 14:55:45.614737 4637 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 14:55:45 crc kubenswrapper[4637]: I1201 14:55:45.615667 4637 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e755a6e309afd78aa4596b9c41707c84ca8de306749cc711e0f957364ca44ea0"} pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Dec 01 14:55:45 crc kubenswrapper[4637]: I1201 14:55:45.615724 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" containerID="cri-o://e755a6e309afd78aa4596b9c41707c84ca8de306749cc711e0f957364ca44ea0" gracePeriod=600 Dec 01 14:55:45 crc kubenswrapper[4637]: I1201 14:55:45.987915 4637 generic.go:334] "Generic (PLEG): container finished" podID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerID="e755a6e309afd78aa4596b9c41707c84ca8de306749cc711e0f957364ca44ea0" exitCode=0 Dec 01 14:55:45 crc kubenswrapper[4637]: I1201 14:55:45.988186 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerDied","Data":"e755a6e309afd78aa4596b9c41707c84ca8de306749cc711e0f957364ca44ea0"} Dec 01 14:55:45 crc kubenswrapper[4637]: I1201 14:55:45.988499 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerStarted","Data":"f441d58a7fd53036d54f051f6c8a3463949b9941e99c7ef5c07b779f2546fa99"} Dec 01 14:55:45 crc kubenswrapper[4637]: I1201 14:55:45.988529 4637 scope.go:117] "RemoveContainer" containerID="4885cfa0f150a63ffa8391a1ed4c896a43e4cdd3b372dc06af2d7e94293fae9c" Dec 01 14:55:59 crc kubenswrapper[4637]: I1201 14:55:59.723859 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-jfx77"] Dec 01 14:55:59 crc kubenswrapper[4637]: E1201 14:55:59.724951 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b" containerName="registry" Dec 01 14:55:59 crc kubenswrapper[4637]: I1201 14:55:59.724966 4637 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b" containerName="registry" Dec 01 14:55:59 crc kubenswrapper[4637]: I1201 14:55:59.725063 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="f242fd70-f7ff-48a3-b7ed-2a5d2082ba5b" containerName="registry" Dec 01 14:55:59 crc kubenswrapper[4637]: I1201 14:55:59.725462 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-jfx77" Dec 01 14:55:59 crc kubenswrapper[4637]: I1201 14:55:59.729968 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 01 14:55:59 crc kubenswrapper[4637]: I1201 14:55:59.730378 4637 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-6jdfq" Dec 01 14:55:59 crc kubenswrapper[4637]: I1201 14:55:59.730538 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 01 14:55:59 crc kubenswrapper[4637]: I1201 14:55:59.751521 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-jfx77"] Dec 01 14:55:59 crc kubenswrapper[4637]: I1201 14:55:59.756583 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-8dkhl"] Dec 01 14:55:59 crc kubenswrapper[4637]: I1201 14:55:59.757270 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-8dkhl" Dec 01 14:55:59 crc kubenswrapper[4637]: I1201 14:55:59.759707 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szpsq\" (UniqueName: \"kubernetes.io/projected/915faac2-3a6c-44de-8f47-a7d3c0aa2306-kube-api-access-szpsq\") pod \"cert-manager-cainjector-7f985d654d-jfx77\" (UID: \"915faac2-3a6c-44de-8f47-a7d3c0aa2306\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-jfx77" Dec 01 14:55:59 crc kubenswrapper[4637]: I1201 14:55:59.761512 4637 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-df898" Dec 01 14:55:59 crc kubenswrapper[4637]: I1201 14:55:59.784766 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-2gbzf"] Dec 01 14:55:59 crc kubenswrapper[4637]: I1201 14:55:59.785729 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-2gbzf" Dec 01 14:55:59 crc kubenswrapper[4637]: I1201 14:55:59.787712 4637 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-xxr9w" Dec 01 14:55:59 crc kubenswrapper[4637]: I1201 14:55:59.845897 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-8dkhl"] Dec 01 14:55:59 crc kubenswrapper[4637]: I1201 14:55:59.859846 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-2gbzf"] Dec 01 14:55:59 crc kubenswrapper[4637]: I1201 14:55:59.871814 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szpsq\" (UniqueName: \"kubernetes.io/projected/915faac2-3a6c-44de-8f47-a7d3c0aa2306-kube-api-access-szpsq\") pod \"cert-manager-cainjector-7f985d654d-jfx77\" (UID: \"915faac2-3a6c-44de-8f47-a7d3c0aa2306\") " 
pod="cert-manager/cert-manager-cainjector-7f985d654d-jfx77" Dec 01 14:55:59 crc kubenswrapper[4637]: I1201 14:55:59.871920 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dddlq\" (UniqueName: \"kubernetes.io/projected/6ada9875-197f-49ea-ae31-130a5e7a6229-kube-api-access-dddlq\") pod \"cert-manager-5b446d88c5-8dkhl\" (UID: \"6ada9875-197f-49ea-ae31-130a5e7a6229\") " pod="cert-manager/cert-manager-5b446d88c5-8dkhl" Dec 01 14:55:59 crc kubenswrapper[4637]: I1201 14:55:59.871974 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4zvq\" (UniqueName: \"kubernetes.io/projected/96ee03cd-c317-432b-8918-7e13da710acb-kube-api-access-r4zvq\") pod \"cert-manager-webhook-5655c58dd6-2gbzf\" (UID: \"96ee03cd-c317-432b-8918-7e13da710acb\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-2gbzf" Dec 01 14:55:59 crc kubenswrapper[4637]: I1201 14:55:59.909899 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szpsq\" (UniqueName: \"kubernetes.io/projected/915faac2-3a6c-44de-8f47-a7d3c0aa2306-kube-api-access-szpsq\") pod \"cert-manager-cainjector-7f985d654d-jfx77\" (UID: \"915faac2-3a6c-44de-8f47-a7d3c0aa2306\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-jfx77" Dec 01 14:55:59 crc kubenswrapper[4637]: I1201 14:55:59.973594 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dddlq\" (UniqueName: \"kubernetes.io/projected/6ada9875-197f-49ea-ae31-130a5e7a6229-kube-api-access-dddlq\") pod \"cert-manager-5b446d88c5-8dkhl\" (UID: \"6ada9875-197f-49ea-ae31-130a5e7a6229\") " pod="cert-manager/cert-manager-5b446d88c5-8dkhl" Dec 01 14:55:59 crc kubenswrapper[4637]: I1201 14:55:59.973661 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4zvq\" (UniqueName: 
\"kubernetes.io/projected/96ee03cd-c317-432b-8918-7e13da710acb-kube-api-access-r4zvq\") pod \"cert-manager-webhook-5655c58dd6-2gbzf\" (UID: \"96ee03cd-c317-432b-8918-7e13da710acb\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-2gbzf" Dec 01 14:56:00 crc kubenswrapper[4637]: I1201 14:56:00.000719 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dddlq\" (UniqueName: \"kubernetes.io/projected/6ada9875-197f-49ea-ae31-130a5e7a6229-kube-api-access-dddlq\") pod \"cert-manager-5b446d88c5-8dkhl\" (UID: \"6ada9875-197f-49ea-ae31-130a5e7a6229\") " pod="cert-manager/cert-manager-5b446d88c5-8dkhl" Dec 01 14:56:00 crc kubenswrapper[4637]: I1201 14:56:00.000872 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4zvq\" (UniqueName: \"kubernetes.io/projected/96ee03cd-c317-432b-8918-7e13da710acb-kube-api-access-r4zvq\") pod \"cert-manager-webhook-5655c58dd6-2gbzf\" (UID: \"96ee03cd-c317-432b-8918-7e13da710acb\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-2gbzf" Dec 01 14:56:00 crc kubenswrapper[4637]: I1201 14:56:00.046210 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-jfx77" Dec 01 14:56:00 crc kubenswrapper[4637]: I1201 14:56:00.071388 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-8dkhl" Dec 01 14:56:00 crc kubenswrapper[4637]: I1201 14:56:00.099342 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-2gbzf" Dec 01 14:56:00 crc kubenswrapper[4637]: I1201 14:56:00.323994 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-jfx77"] Dec 01 14:56:00 crc kubenswrapper[4637]: W1201 14:56:00.338882 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod915faac2_3a6c_44de_8f47_a7d3c0aa2306.slice/crio-632827ec1905de4665b31a04fd34e8e9fccd21015c511e1acc363c78e5c9801e WatchSource:0}: Error finding container 632827ec1905de4665b31a04fd34e8e9fccd21015c511e1acc363c78e5c9801e: Status 404 returned error can't find the container with id 632827ec1905de4665b31a04fd34e8e9fccd21015c511e1acc363c78e5c9801e Dec 01 14:56:00 crc kubenswrapper[4637]: I1201 14:56:00.348220 4637 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 14:56:00 crc kubenswrapper[4637]: I1201 14:56:00.413203 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-2gbzf"] Dec 01 14:56:00 crc kubenswrapper[4637]: W1201 14:56:00.425052 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96ee03cd_c317_432b_8918_7e13da710acb.slice/crio-d5fe024aa21ccbe2c4205ff1e607ce08738c91760d681e9e5cfe798bcee4b06c WatchSource:0}: Error finding container d5fe024aa21ccbe2c4205ff1e607ce08738c91760d681e9e5cfe798bcee4b06c: Status 404 returned error can't find the container with id d5fe024aa21ccbe2c4205ff1e607ce08738c91760d681e9e5cfe798bcee4b06c Dec 01 14:56:00 crc kubenswrapper[4637]: I1201 14:56:00.559455 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-8dkhl"] Dec 01 14:56:00 crc kubenswrapper[4637]: W1201 14:56:00.563187 4637 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ada9875_197f_49ea_ae31_130a5e7a6229.slice/crio-648e92f4b4d4222cb8907ba9178de4dcf371ab92e1b5431b55e87e804c95493f WatchSource:0}: Error finding container 648e92f4b4d4222cb8907ba9178de4dcf371ab92e1b5431b55e87e804c95493f: Status 404 returned error can't find the container with id 648e92f4b4d4222cb8907ba9178de4dcf371ab92e1b5431b55e87e804c95493f Dec 01 14:56:01 crc kubenswrapper[4637]: I1201 14:56:01.126558 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-2gbzf" event={"ID":"96ee03cd-c317-432b-8918-7e13da710acb","Type":"ContainerStarted","Data":"d5fe024aa21ccbe2c4205ff1e607ce08738c91760d681e9e5cfe798bcee4b06c"} Dec 01 14:56:01 crc kubenswrapper[4637]: I1201 14:56:01.129388 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-jfx77" event={"ID":"915faac2-3a6c-44de-8f47-a7d3c0aa2306","Type":"ContainerStarted","Data":"632827ec1905de4665b31a04fd34e8e9fccd21015c511e1acc363c78e5c9801e"} Dec 01 14:56:01 crc kubenswrapper[4637]: I1201 14:56:01.131493 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-8dkhl" event={"ID":"6ada9875-197f-49ea-ae31-130a5e7a6229","Type":"ContainerStarted","Data":"648e92f4b4d4222cb8907ba9178de4dcf371ab92e1b5431b55e87e804c95493f"} Dec 01 14:56:05 crc kubenswrapper[4637]: I1201 14:56:05.179182 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-jfx77" event={"ID":"915faac2-3a6c-44de-8f47-a7d3c0aa2306","Type":"ContainerStarted","Data":"53f5bc704d7e49c22e3a12928b46e2106e95fc0ef6466e972b35b6e512fdf24c"} Dec 01 14:56:05 crc kubenswrapper[4637]: I1201 14:56:05.186247 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-8dkhl" 
event={"ID":"6ada9875-197f-49ea-ae31-130a5e7a6229","Type":"ContainerStarted","Data":"8e3467999be520588a29ff4eb3b229a787256f8cd91f3e96d76e9dacb4829ce3"} Dec 01 14:56:05 crc kubenswrapper[4637]: I1201 14:56:05.189252 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-2gbzf" event={"ID":"96ee03cd-c317-432b-8918-7e13da710acb","Type":"ContainerStarted","Data":"8a46fd9ecb6032c66c6494c6526b9d266d8d48595a4152943cc61aebfdc0b4d4"} Dec 01 14:56:05 crc kubenswrapper[4637]: I1201 14:56:05.189550 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-2gbzf" Dec 01 14:56:05 crc kubenswrapper[4637]: I1201 14:56:05.213089 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-jfx77" podStartSLOduration=2.600493949 podStartE2EDuration="6.213054777s" podCreationTimestamp="2025-12-01 14:55:59 +0000 UTC" firstStartedPulling="2025-12-01 14:56:00.347952352 +0000 UTC m=+610.865661180" lastFinishedPulling="2025-12-01 14:56:03.96051316 +0000 UTC m=+614.478222008" observedRunningTime="2025-12-01 14:56:05.206662634 +0000 UTC m=+615.724371462" watchObservedRunningTime="2025-12-01 14:56:05.213054777 +0000 UTC m=+615.730763605" Dec 01 14:56:05 crc kubenswrapper[4637]: I1201 14:56:05.239216 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-8dkhl" podStartSLOduration=2.825940939 podStartE2EDuration="6.239174892s" podCreationTimestamp="2025-12-01 14:55:59 +0000 UTC" firstStartedPulling="2025-12-01 14:56:00.566144418 +0000 UTC m=+611.083853246" lastFinishedPulling="2025-12-01 14:56:03.979378371 +0000 UTC m=+614.497087199" observedRunningTime="2025-12-01 14:56:05.23094959 +0000 UTC m=+615.748658418" watchObservedRunningTime="2025-12-01 14:56:05.239174892 +0000 UTC m=+615.756883720" Dec 01 14:56:05 crc kubenswrapper[4637]: I1201 14:56:05.254894 4637 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-2gbzf" podStartSLOduration=2.712883074 podStartE2EDuration="6.254855346s" podCreationTimestamp="2025-12-01 14:55:59 +0000 UTC" firstStartedPulling="2025-12-01 14:56:00.427716397 +0000 UTC m=+610.945425225" lastFinishedPulling="2025-12-01 14:56:03.969688669 +0000 UTC m=+614.487397497" observedRunningTime="2025-12-01 14:56:05.252652007 +0000 UTC m=+615.770360835" watchObservedRunningTime="2025-12-01 14:56:05.254855346 +0000 UTC m=+615.772564174" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.075405 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rhl62"] Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.077271 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="ovn-controller" containerID="cri-o://ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd" gracePeriod=30 Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.077438 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="ovn-acl-logging" containerID="cri-o://33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205" gracePeriod=30 Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.077578 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="sbdb" containerID="cri-o://68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985" gracePeriod=30 Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.077417 4637 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989" gracePeriod=30 Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.077669 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="nbdb" containerID="cri-o://0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372" gracePeriod=30 Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.077346 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="northd" containerID="cri-o://230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855" gracePeriod=30 Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.077376 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="kube-rbac-proxy-node" containerID="cri-o://23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f" gracePeriod=30 Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.103169 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-2gbzf" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.148220 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="ovnkube-controller" containerID="cri-o://c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1" gracePeriod=30 Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.222519 4637 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-multus_multus-n2brl_f64d8237-8116-4742-8d7f-9f6e8018e4c2/kube-multus/2.log" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.226418 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n2brl_f64d8237-8116-4742-8d7f-9f6e8018e4c2/kube-multus/1.log" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.226476 4637 generic.go:334] "Generic (PLEG): container finished" podID="f64d8237-8116-4742-8d7f-9f6e8018e4c2" containerID="9cd2c8aa79d76f9a0e2c45cff0962ab688d8220c4f01310db1c1cd4d4910c4e4" exitCode=2 Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.226627 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-n2brl" event={"ID":"f64d8237-8116-4742-8d7f-9f6e8018e4c2","Type":"ContainerDied","Data":"9cd2c8aa79d76f9a0e2c45cff0962ab688d8220c4f01310db1c1cd4d4910c4e4"} Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.226757 4637 scope.go:117] "RemoveContainer" containerID="a163b9bea4f475be435e2bcf52012f4682da33e52b9a6b4b6dd6a71b59045a26" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.227538 4637 scope.go:117] "RemoveContainer" containerID="9cd2c8aa79d76f9a0e2c45cff0962ab688d8220c4f01310db1c1cd4d4910c4e4" Dec 01 14:56:10 crc kubenswrapper[4637]: E1201 14:56:10.227771 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-n2brl_openshift-multus(f64d8237-8116-4742-8d7f-9f6e8018e4c2)\"" pod="openshift-multus/multus-n2brl" podUID="f64d8237-8116-4742-8d7f-9f6e8018e4c2" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.235224 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhl62_d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831/ovnkube-controller/3.log" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.238254 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhl62_d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831/ovn-acl-logging/0.log" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.241145 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhl62_d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831/ovn-controller/0.log" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.246275 4637 generic.go:334] "Generic (PLEG): container finished" podID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerID="23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f" exitCode=0 Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.246319 4637 generic.go:334] "Generic (PLEG): container finished" podID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerID="33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205" exitCode=143 Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.246328 4637 generic.go:334] "Generic (PLEG): container finished" podID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerID="ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd" exitCode=143 Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.246357 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" event={"ID":"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831","Type":"ContainerDied","Data":"23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f"} Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.246400 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" event={"ID":"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831","Type":"ContainerDied","Data":"33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205"} Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.246412 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" 
event={"ID":"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831","Type":"ContainerDied","Data":"ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd"} Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.435906 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhl62_d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831/ovnkube-controller/3.log" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.438205 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhl62_d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831/ovn-acl-logging/0.log" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.439098 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhl62_d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831/ovn-controller/0.log" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.439706 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.503603 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-57d4j"] Dec 01 14:56:10 crc kubenswrapper[4637]: E1201 14:56:10.503900 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="ovnkube-controller" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.503916 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="ovnkube-controller" Dec 01 14:56:10 crc kubenswrapper[4637]: E1201 14:56:10.503926 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="ovnkube-controller" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.503980 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" 
containerName="ovnkube-controller" Dec 01 14:56:10 crc kubenswrapper[4637]: E1201 14:56:10.503994 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="ovnkube-controller" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.504000 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="ovnkube-controller" Dec 01 14:56:10 crc kubenswrapper[4637]: E1201 14:56:10.504008 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="ovn-controller" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.504014 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="ovn-controller" Dec 01 14:56:10 crc kubenswrapper[4637]: E1201 14:56:10.504022 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="nbdb" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.504028 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="nbdb" Dec 01 14:56:10 crc kubenswrapper[4637]: E1201 14:56:10.504088 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="kube-rbac-proxy-node" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.504095 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="kube-rbac-proxy-node" Dec 01 14:56:10 crc kubenswrapper[4637]: E1201 14:56:10.504106 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="kube-rbac-proxy-ovn-metrics" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.504113 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" 
containerName="kube-rbac-proxy-ovn-metrics" Dec 01 14:56:10 crc kubenswrapper[4637]: E1201 14:56:10.504123 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="northd" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.504129 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="northd" Dec 01 14:56:10 crc kubenswrapper[4637]: E1201 14:56:10.504139 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="kubecfg-setup" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.504145 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="kubecfg-setup" Dec 01 14:56:10 crc kubenswrapper[4637]: E1201 14:56:10.504167 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="sbdb" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.504172 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="sbdb" Dec 01 14:56:10 crc kubenswrapper[4637]: E1201 14:56:10.504183 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="ovn-acl-logging" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.504188 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="ovn-acl-logging" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.504339 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="kube-rbac-proxy-node" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.504353 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="sbdb" Dec 01 
14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.504368 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="ovnkube-controller" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.504376 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="ovnkube-controller" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.504400 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="ovnkube-controller" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.504406 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="northd" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.504413 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="ovnkube-controller" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.504418 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="ovn-acl-logging" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.504427 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="nbdb" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.504437 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="ovn-controller" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.504446 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="kube-rbac-proxy-ovn-metrics" Dec 01 14:56:10 crc kubenswrapper[4637]: E1201 14:56:10.504567 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" 
containerName="ovnkube-controller" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.504574 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="ovnkube-controller" Dec 01 14:56:10 crc kubenswrapper[4637]: E1201 14:56:10.504584 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="ovnkube-controller" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.504589 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="ovnkube-controller" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.504711 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerName="ovnkube-controller" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.506473 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.536415 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-systemd-units\") pod \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.536470 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-run-openvswitch\") pod \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.536507 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-etc-openvswitch\") 
pod \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.536529 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-var-lib-cni-networks-ovn-kubernetes\") pod \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.536576 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88g4p\" (UniqueName: \"kubernetes.io/projected/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-kube-api-access-88g4p\") pod \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.536626 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-ovnkube-config\") pod \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.536647 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-cni-netd\") pod \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.536655 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" (UID: "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.536676 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-run-systemd\") pod \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.536712 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" (UID: "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.536739 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-run-ovn-kubernetes\") pod \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.536810 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-cni-bin\") pod \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.536837 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-kubelet\") pod \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.536865 4637 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-node-log\") pod \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.536889 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-slash\") pod \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.536952 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-env-overrides\") pod \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.536980 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-run-netns\") pod \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537004 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-ovn-node-metrics-cert\") pod \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537032 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-ovnkube-script-lib\") pod 
\"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537058 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-log-socket\") pod \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537081 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-var-lib-openvswitch\") pod \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537107 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-run-ovn\") pod \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\" (UID: \"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831\") " Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537235 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k55wz\" (UniqueName: \"kubernetes.io/projected/e13f4bab-6312-459e-b86c-2c75e8ae83ee-kube-api-access-k55wz\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537277 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-node-log\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 
14:56:10.537305 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-host-run-ovn-kubernetes\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537335 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-host-run-netns\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537355 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e13f4bab-6312-459e-b86c-2c75e8ae83ee-ovnkube-config\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537377 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537396 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-var-lib-openvswitch\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537413 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e13f4bab-6312-459e-b86c-2c75e8ae83ee-ovnkube-script-lib\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537437 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-systemd-units\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537456 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-log-socket\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537491 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" (UID: "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537503 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-run-ovn\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537521 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-host-kubelet\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537538 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-host-slash\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537557 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e13f4bab-6312-459e-b86c-2c75e8ae83ee-env-overrides\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537582 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-run-openvswitch\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537598 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-host-cni-netd\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537619 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-etc-openvswitch\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537650 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-run-systemd\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537666 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-host-cni-bin\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537699 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e13f4bab-6312-459e-b86c-2c75e8ae83ee-ovn-node-metrics-cert\") pod \"ovnkube-node-57d4j\" (UID: 
\"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537737 4637 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537751 4637 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537760 4637 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537756 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" (UID: "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537784 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" (UID: "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537827 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" (UID: "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537845 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" (UID: "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537852 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" (UID: "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537867 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-node-log" (OuterVolumeSpecName: "node-log") pod "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" (UID: "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.537883 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-slash" (OuterVolumeSpecName: "host-slash") pod "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" (UID: "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.538289 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" (UID: "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.538321 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" (UID: "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.538355 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-log-socket" (OuterVolumeSpecName: "log-socket") pod "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" (UID: "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.538380 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" (UID: "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.538403 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" (UID: "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.538614 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" (UID: "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.538747 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" (UID: "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.544563 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" (UID: "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.544676 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-kube-api-access-88g4p" (OuterVolumeSpecName: "kube-api-access-88g4p") pod "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" (UID: "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831"). InnerVolumeSpecName "kube-api-access-88g4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.552670 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" (UID: "d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638321 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-host-run-netns\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638382 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e13f4bab-6312-459e-b86c-2c75e8ae83ee-ovnkube-config\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638398 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638418 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-var-lib-openvswitch\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638438 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e13f4bab-6312-459e-b86c-2c75e8ae83ee-ovnkube-script-lib\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638455 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-systemd-units\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638470 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-log-socket\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638489 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-run-ovn\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638503 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-host-kubelet\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638516 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-host-slash\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638534 
4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e13f4bab-6312-459e-b86c-2c75e8ae83ee-env-overrides\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638552 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-run-openvswitch\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638565 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-host-cni-netd\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638580 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-etc-openvswitch\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638601 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-host-cni-bin\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638614 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-run-systemd\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638635 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e13f4bab-6312-459e-b86c-2c75e8ae83ee-ovn-node-metrics-cert\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638653 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k55wz\" (UniqueName: \"kubernetes.io/projected/e13f4bab-6312-459e-b86c-2c75e8ae83ee-kube-api-access-k55wz\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638674 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-node-log\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638692 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-host-run-ovn-kubernetes\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638729 4637 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638739 4637 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638748 4637 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-slash\") on node \"crc\" DevicePath \"\"" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638756 4637 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-node-log\") on node \"crc\" DevicePath \"\"" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638765 4637 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638772 4637 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638780 4637 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638789 4637 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 01 14:56:10 
crc kubenswrapper[4637]: I1201 14:56:10.638797 4637 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-log-socket\") on node \"crc\" DevicePath \"\"" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638805 4637 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638813 4637 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638821 4637 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638830 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88g4p\" (UniqueName: \"kubernetes.io/projected/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-kube-api-access-88g4p\") on node \"crc\" DevicePath \"\"" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638840 4637 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638848 4637 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638856 4637 
reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638864 4637 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638901 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-host-run-ovn-kubernetes\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.638950 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-host-run-netns\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.639509 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e13f4bab-6312-459e-b86c-2c75e8ae83ee-ovnkube-config\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.639544 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-57d4j\" (UID: 
\"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.639574 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-var-lib-openvswitch\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.640093 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e13f4bab-6312-459e-b86c-2c75e8ae83ee-ovnkube-script-lib\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.640135 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-host-cni-netd\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.640126 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-run-openvswitch\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.640165 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-host-slash\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 
crc kubenswrapper[4637]: I1201 14:56:10.640180 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-run-systemd\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.640238 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-etc-openvswitch\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.640268 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-host-cni-bin\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.640304 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-systemd-units\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.640342 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-log-socket\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.640378 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-run-ovn\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.640342 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-host-kubelet\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.640495 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e13f4bab-6312-459e-b86c-2c75e8ae83ee-node-log\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.640588 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e13f4bab-6312-459e-b86c-2c75e8ae83ee-env-overrides\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.648327 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e13f4bab-6312-459e-b86c-2c75e8ae83ee-ovn-node-metrics-cert\") pod \"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.657930 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k55wz\" (UniqueName: \"kubernetes.io/projected/e13f4bab-6312-459e-b86c-2c75e8ae83ee-kube-api-access-k55wz\") pod 
\"ovnkube-node-57d4j\" (UID: \"e13f4bab-6312-459e-b86c-2c75e8ae83ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: I1201 14:56:10.829026 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:10 crc kubenswrapper[4637]: W1201 14:56:10.845784 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode13f4bab_6312_459e_b86c_2c75e8ae83ee.slice/crio-51e93010c51dd397b36392a8b71da28986f54c5ce0677a53279edcb390f38f99 WatchSource:0}: Error finding container 51e93010c51dd397b36392a8b71da28986f54c5ce0677a53279edcb390f38f99: Status 404 returned error can't find the container with id 51e93010c51dd397b36392a8b71da28986f54c5ce0677a53279edcb390f38f99 Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.256287 4637 generic.go:334] "Generic (PLEG): container finished" podID="e13f4bab-6312-459e-b86c-2c75e8ae83ee" containerID="a50624d1b7fd3cfc6a1cd6329306da7c7920cc9f651207404d1636a4bd0e8d91" exitCode=0 Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.256348 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" event={"ID":"e13f4bab-6312-459e-b86c-2c75e8ae83ee","Type":"ContainerDied","Data":"a50624d1b7fd3cfc6a1cd6329306da7c7920cc9f651207404d1636a4bd0e8d91"} Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.256373 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" event={"ID":"e13f4bab-6312-459e-b86c-2c75e8ae83ee","Type":"ContainerStarted","Data":"51e93010c51dd397b36392a8b71da28986f54c5ce0677a53279edcb390f38f99"} Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.260269 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n2brl_f64d8237-8116-4742-8d7f-9f6e8018e4c2/kube-multus/2.log" Dec 01 14:56:11 crc kubenswrapper[4637]: 
I1201 14:56:11.263633 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhl62_d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831/ovnkube-controller/3.log" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.265576 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhl62_d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831/ovn-acl-logging/0.log" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.266021 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rhl62_d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831/ovn-controller/0.log" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.266463 4637 generic.go:334] "Generic (PLEG): container finished" podID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerID="c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1" exitCode=0 Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.266502 4637 generic.go:334] "Generic (PLEG): container finished" podID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerID="68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985" exitCode=0 Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.266511 4637 generic.go:334] "Generic (PLEG): container finished" podID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerID="0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372" exitCode=0 Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.266522 4637 generic.go:334] "Generic (PLEG): container finished" podID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerID="230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855" exitCode=0 Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.266532 4637 generic.go:334] "Generic (PLEG): container finished" podID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" containerID="1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989" exitCode=0 Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.266557 
4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" event={"ID":"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831","Type":"ContainerDied","Data":"c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1"} Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.266587 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" event={"ID":"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831","Type":"ContainerDied","Data":"68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985"} Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.266601 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" event={"ID":"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831","Type":"ContainerDied","Data":"0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372"} Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.266615 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" event={"ID":"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831","Type":"ContainerDied","Data":"230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855"} Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.266628 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" event={"ID":"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831","Type":"ContainerDied","Data":"1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989"} Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.266641 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" event={"ID":"d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831","Type":"ContainerDied","Data":"a212d35dc564b2ac9901c0e0d33b9ebd4c79caec187c26475edbeee582374122"} Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.266656 4637 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292"} Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.266669 4637 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985"} Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.266677 4637 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372"} Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.266685 4637 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855"} Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.266694 4637 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989"} Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.266701 4637 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f"} Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.266709 4637 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205"} Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.266717 4637 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd"} Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.266725 4637 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405"} Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.266741 4637 scope.go:117] "RemoveContainer" containerID="c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.266888 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rhl62" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.286316 4637 scope.go:117] "RemoveContainer" containerID="abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.367630 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rhl62"] Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.370571 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rhl62"] Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.373440 4637 scope.go:117] "RemoveContainer" containerID="68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.393571 4637 scope.go:117] "RemoveContainer" containerID="0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.413574 4637 scope.go:117] "RemoveContainer" containerID="230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.430012 4637 scope.go:117] "RemoveContainer" containerID="1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.458817 4637 scope.go:117] "RemoveContainer" containerID="23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.483741 4637 scope.go:117] "RemoveContainer" 
containerID="33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.514696 4637 scope.go:117] "RemoveContainer" containerID="ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.543774 4637 scope.go:117] "RemoveContainer" containerID="6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.561488 4637 scope.go:117] "RemoveContainer" containerID="c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1" Dec 01 14:56:11 crc kubenswrapper[4637]: E1201 14:56:11.564390 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1\": container with ID starting with c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1 not found: ID does not exist" containerID="c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.564466 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1"} err="failed to get container status \"c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1\": rpc error: code = NotFound desc = could not find container \"c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1\": container with ID starting with c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.564514 4637 scope.go:117] "RemoveContainer" containerID="abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292" Dec 01 14:56:11 crc kubenswrapper[4637]: E1201 14:56:11.565401 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292\": container with ID starting with abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292 not found: ID does not exist" containerID="abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.565437 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292"} err="failed to get container status \"abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292\": rpc error: code = NotFound desc = could not find container \"abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292\": container with ID starting with abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.565457 4637 scope.go:117] "RemoveContainer" containerID="68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985" Dec 01 14:56:11 crc kubenswrapper[4637]: E1201 14:56:11.565701 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\": container with ID starting with 68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985 not found: ID does not exist" containerID="68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.565728 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985"} err="failed to get container status \"68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\": rpc error: code = NotFound desc = could not find container 
\"68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\": container with ID starting with 68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.565749 4637 scope.go:117] "RemoveContainer" containerID="0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372" Dec 01 14:56:11 crc kubenswrapper[4637]: E1201 14:56:11.566120 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\": container with ID starting with 0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372 not found: ID does not exist" containerID="0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.566177 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372"} err="failed to get container status \"0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\": rpc error: code = NotFound desc = could not find container \"0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\": container with ID starting with 0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.566241 4637 scope.go:117] "RemoveContainer" containerID="230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855" Dec 01 14:56:11 crc kubenswrapper[4637]: E1201 14:56:11.568363 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\": container with ID starting with 230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855 not found: ID does not exist" 
containerID="230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.568409 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855"} err="failed to get container status \"230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\": rpc error: code = NotFound desc = could not find container \"230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\": container with ID starting with 230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.568440 4637 scope.go:117] "RemoveContainer" containerID="1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989" Dec 01 14:56:11 crc kubenswrapper[4637]: E1201 14:56:11.568787 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\": container with ID starting with 1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989 not found: ID does not exist" containerID="1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.568824 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989"} err="failed to get container status \"1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\": rpc error: code = NotFound desc = could not find container \"1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\": container with ID starting with 1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.568849 4637 scope.go:117] 
"RemoveContainer" containerID="23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f" Dec 01 14:56:11 crc kubenswrapper[4637]: E1201 14:56:11.570241 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\": container with ID starting with 23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f not found: ID does not exist" containerID="23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.570315 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f"} err="failed to get container status \"23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\": rpc error: code = NotFound desc = could not find container \"23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\": container with ID starting with 23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.570361 4637 scope.go:117] "RemoveContainer" containerID="33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205" Dec 01 14:56:11 crc kubenswrapper[4637]: E1201 14:56:11.573422 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\": container with ID starting with 33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205 not found: ID does not exist" containerID="33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.573472 4637 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205"} err="failed to get container status \"33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\": rpc error: code = NotFound desc = could not find container \"33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\": container with ID starting with 33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.573490 4637 scope.go:117] "RemoveContainer" containerID="ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd" Dec 01 14:56:11 crc kubenswrapper[4637]: E1201 14:56:11.573775 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\": container with ID starting with ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd not found: ID does not exist" containerID="ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.573805 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd"} err="failed to get container status \"ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\": rpc error: code = NotFound desc = could not find container \"ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\": container with ID starting with ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.573819 4637 scope.go:117] "RemoveContainer" containerID="6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405" Dec 01 14:56:11 crc kubenswrapper[4637]: E1201 14:56:11.574114 4637 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\": container with ID starting with 6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405 not found: ID does not exist" containerID="6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.574206 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405"} err="failed to get container status \"6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\": rpc error: code = NotFound desc = could not find container \"6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\": container with ID starting with 6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.574219 4637 scope.go:117] "RemoveContainer" containerID="c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.574463 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1"} err="failed to get container status \"c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1\": rpc error: code = NotFound desc = could not find container \"c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1\": container with ID starting with c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.574491 4637 scope.go:117] "RemoveContainer" containerID="abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.574680 4637 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292"} err="failed to get container status \"abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292\": rpc error: code = NotFound desc = could not find container \"abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292\": container with ID starting with abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.574696 4637 scope.go:117] "RemoveContainer" containerID="68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.576192 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985"} err="failed to get container status \"68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\": rpc error: code = NotFound desc = could not find container \"68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\": container with ID starting with 68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.576384 4637 scope.go:117] "RemoveContainer" containerID="0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.577040 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372"} err="failed to get container status \"0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\": rpc error: code = NotFound desc = could not find container \"0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\": container with ID starting with 
0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.577060 4637 scope.go:117] "RemoveContainer" containerID="230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.577324 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855"} err="failed to get container status \"230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\": rpc error: code = NotFound desc = could not find container \"230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\": container with ID starting with 230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.577343 4637 scope.go:117] "RemoveContainer" containerID="1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.577784 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989"} err="failed to get container status \"1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\": rpc error: code = NotFound desc = could not find container \"1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\": container with ID starting with 1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.577801 4637 scope.go:117] "RemoveContainer" containerID="23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.578085 4637 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f"} err="failed to get container status \"23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\": rpc error: code = NotFound desc = could not find container \"23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\": container with ID starting with 23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.578104 4637 scope.go:117] "RemoveContainer" containerID="33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.578393 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205"} err="failed to get container status \"33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\": rpc error: code = NotFound desc = could not find container \"33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\": container with ID starting with 33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.578409 4637 scope.go:117] "RemoveContainer" containerID="ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.578676 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd"} err="failed to get container status \"ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\": rpc error: code = NotFound desc = could not find container \"ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\": container with ID starting with ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd not found: ID does not 
exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.578691 4637 scope.go:117] "RemoveContainer" containerID="6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.578917 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405"} err="failed to get container status \"6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\": rpc error: code = NotFound desc = could not find container \"6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\": container with ID starting with 6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.578945 4637 scope.go:117] "RemoveContainer" containerID="c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.579177 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1"} err="failed to get container status \"c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1\": rpc error: code = NotFound desc = could not find container \"c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1\": container with ID starting with c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.579193 4637 scope.go:117] "RemoveContainer" containerID="abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.579441 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292"} err="failed to get container status 
\"abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292\": rpc error: code = NotFound desc = could not find container \"abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292\": container with ID starting with abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.579458 4637 scope.go:117] "RemoveContainer" containerID="68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.579690 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985"} err="failed to get container status \"68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\": rpc error: code = NotFound desc = could not find container \"68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\": container with ID starting with 68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.579705 4637 scope.go:117] "RemoveContainer" containerID="0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.579968 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372"} err="failed to get container status \"0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\": rpc error: code = NotFound desc = could not find container \"0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\": container with ID starting with 0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.579987 4637 scope.go:117] "RemoveContainer" 
containerID="230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.580310 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855"} err="failed to get container status \"230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\": rpc error: code = NotFound desc = could not find container \"230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\": container with ID starting with 230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.580327 4637 scope.go:117] "RemoveContainer" containerID="1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.580572 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989"} err="failed to get container status \"1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\": rpc error: code = NotFound desc = could not find container \"1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\": container with ID starting with 1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.580589 4637 scope.go:117] "RemoveContainer" containerID="23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.580822 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f"} err="failed to get container status \"23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\": rpc error: code = NotFound desc = could 
not find container \"23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\": container with ID starting with 23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.580839 4637 scope.go:117] "RemoveContainer" containerID="33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.581097 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205"} err="failed to get container status \"33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\": rpc error: code = NotFound desc = could not find container \"33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\": container with ID starting with 33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.581117 4637 scope.go:117] "RemoveContainer" containerID="ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.581397 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd"} err="failed to get container status \"ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\": rpc error: code = NotFound desc = could not find container \"ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\": container with ID starting with ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.581414 4637 scope.go:117] "RemoveContainer" containerID="6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 
14:56:11.581639 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405"} err="failed to get container status \"6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\": rpc error: code = NotFound desc = could not find container \"6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\": container with ID starting with 6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.581659 4637 scope.go:117] "RemoveContainer" containerID="c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.582017 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1"} err="failed to get container status \"c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1\": rpc error: code = NotFound desc = could not find container \"c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1\": container with ID starting with c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.582044 4637 scope.go:117] "RemoveContainer" containerID="abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.582342 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292"} err="failed to get container status \"abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292\": rpc error: code = NotFound desc = could not find container \"abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292\": container with ID starting with 
abacfff4d74a32ab813d3484a687228a395e072b808c0229e190a5ebf1560292 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.582359 4637 scope.go:117] "RemoveContainer" containerID="68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.582625 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985"} err="failed to get container status \"68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\": rpc error: code = NotFound desc = could not find container \"68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985\": container with ID starting with 68074de9c89c4b31292a97a9ee2a28cc6c98a0b2ed7be2e52b7402a33ad07985 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.582641 4637 scope.go:117] "RemoveContainer" containerID="0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.582912 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372"} err="failed to get container status \"0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\": rpc error: code = NotFound desc = could not find container \"0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372\": container with ID starting with 0f542e813e2194fc205938bd86b94de0a82b4b37d96b73d971d2a7d263d06372 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.582959 4637 scope.go:117] "RemoveContainer" containerID="230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.584625 4637 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855"} err="failed to get container status \"230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\": rpc error: code = NotFound desc = could not find container \"230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855\": container with ID starting with 230b9834c6cfa008b6cb90b480bb1c2be4a643fca7be4d767cd4f3a30cdb6855 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.584742 4637 scope.go:117] "RemoveContainer" containerID="1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.585147 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989"} err="failed to get container status \"1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\": rpc error: code = NotFound desc = could not find container \"1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989\": container with ID starting with 1b3cfe9d82c65569116ee776a9cc0faf9d982f80fc99943a746d0e890435b989 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.585177 4637 scope.go:117] "RemoveContainer" containerID="23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.585442 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f"} err="failed to get container status \"23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\": rpc error: code = NotFound desc = could not find container \"23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f\": container with ID starting with 23e3c17cace26113ce39869c1bd647c95e1ceaf5a1e7f531e814d19b12a0690f not found: ID does not 
exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.590167 4637 scope.go:117] "RemoveContainer" containerID="33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.590825 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205"} err="failed to get container status \"33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\": rpc error: code = NotFound desc = could not find container \"33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205\": container with ID starting with 33ef98c3452b945fddc0f482620161387135d7dff03b5884e30c4bf827393205 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.590903 4637 scope.go:117] "RemoveContainer" containerID="ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.593209 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd"} err="failed to get container status \"ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\": rpc error: code = NotFound desc = could not find container \"ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd\": container with ID starting with ae704e4d617a66826084e41c701fe2176665762bc24bf35f5bcabf238c38b4dd not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.593264 4637 scope.go:117] "RemoveContainer" containerID="6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.593679 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405"} err="failed to get container status 
\"6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\": rpc error: code = NotFound desc = could not find container \"6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405\": container with ID starting with 6c2c47ce47e56f2df8984f0d121d3bb69f7808ceb4c9867ea5a2aece10a15405 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.593732 4637 scope.go:117] "RemoveContainer" containerID="c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.594205 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1"} err="failed to get container status \"c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1\": rpc error: code = NotFound desc = could not find container \"c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1\": container with ID starting with c843dcea4a6aafc43a9f42e159713edd1db99c0dd07f1db57f1e3eb04ff58ff1 not found: ID does not exist" Dec 01 14:56:11 crc kubenswrapper[4637]: I1201 14:56:11.780079 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831" path="/var/lib/kubelet/pods/d4fc1be7-621f-4fdc-bc4d-08b6b9e9e831/volumes" Dec 01 14:56:12 crc kubenswrapper[4637]: I1201 14:56:12.282781 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" event={"ID":"e13f4bab-6312-459e-b86c-2c75e8ae83ee","Type":"ContainerStarted","Data":"4e8b94a37a0483203e42c88727bc783b8077940a89fa55745153ed8522e1690c"} Dec 01 14:56:12 crc kubenswrapper[4637]: I1201 14:56:12.283235 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" event={"ID":"e13f4bab-6312-459e-b86c-2c75e8ae83ee","Type":"ContainerStarted","Data":"06a4cc5835a7c1eaa11bbc892936df2da73151fc408d3e9fd95c2de1205783d5"} Dec 01 
14:56:12 crc kubenswrapper[4637]: I1201 14:56:12.283246 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" event={"ID":"e13f4bab-6312-459e-b86c-2c75e8ae83ee","Type":"ContainerStarted","Data":"a71424e205b5910e7f94b963a70a0204837a38ad8dc6b4c8141c23b3550d9989"} Dec 01 14:56:12 crc kubenswrapper[4637]: I1201 14:56:12.283255 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" event={"ID":"e13f4bab-6312-459e-b86c-2c75e8ae83ee","Type":"ContainerStarted","Data":"b7a49268b2e0b0209ec46f61a849801765bdf5d038263200913a1269db52ec10"} Dec 01 14:56:12 crc kubenswrapper[4637]: I1201 14:56:12.283264 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" event={"ID":"e13f4bab-6312-459e-b86c-2c75e8ae83ee","Type":"ContainerStarted","Data":"d66c7de0a08346c9852c8e76917f0d46d0c792320ff75433e7d9c6e3d1152e9f"} Dec 01 14:56:12 crc kubenswrapper[4637]: I1201 14:56:12.283273 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" event={"ID":"e13f4bab-6312-459e-b86c-2c75e8ae83ee","Type":"ContainerStarted","Data":"83dc044b4507033374176cf53f56d6a308337e901676e46e620e637ad287cd89"} Dec 01 14:56:15 crc kubenswrapper[4637]: I1201 14:56:15.317519 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" event={"ID":"e13f4bab-6312-459e-b86c-2c75e8ae83ee","Type":"ContainerStarted","Data":"4b9773d0c61e357b6f40fa775444e012251b693bfc512499943e77c6ea3028d3"} Dec 01 14:56:17 crc kubenswrapper[4637]: I1201 14:56:17.336130 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" event={"ID":"e13f4bab-6312-459e-b86c-2c75e8ae83ee","Type":"ContainerStarted","Data":"85c48515d3dd635b41369f545853fe1f246e8cc52b968254e08e8865128b438e"} Dec 01 14:56:17 crc kubenswrapper[4637]: I1201 14:56:17.337261 4637 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:17 crc kubenswrapper[4637]: I1201 14:56:17.337289 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:17 crc kubenswrapper[4637]: I1201 14:56:17.337347 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:17 crc kubenswrapper[4637]: I1201 14:56:17.376726 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:17 crc kubenswrapper[4637]: I1201 14:56:17.382538 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:17 crc kubenswrapper[4637]: I1201 14:56:17.387559 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" podStartSLOduration=7.387546172 podStartE2EDuration="7.387546172s" podCreationTimestamp="2025-12-01 14:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:56:17.381979002 +0000 UTC m=+627.899687830" watchObservedRunningTime="2025-12-01 14:56:17.387546172 +0000 UTC m=+627.905255000" Dec 01 14:56:22 crc kubenswrapper[4637]: I1201 14:56:22.772081 4637 scope.go:117] "RemoveContainer" containerID="9cd2c8aa79d76f9a0e2c45cff0962ab688d8220c4f01310db1c1cd4d4910c4e4" Dec 01 14:56:22 crc kubenswrapper[4637]: E1201 14:56:22.773359 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-n2brl_openshift-multus(f64d8237-8116-4742-8d7f-9f6e8018e4c2)\"" pod="openshift-multus/multus-n2brl" podUID="f64d8237-8116-4742-8d7f-9f6e8018e4c2" Dec 
01 14:56:34 crc kubenswrapper[4637]: I1201 14:56:34.772026 4637 scope.go:117] "RemoveContainer" containerID="9cd2c8aa79d76f9a0e2c45cff0962ab688d8220c4f01310db1c1cd4d4910c4e4" Dec 01 14:56:35 crc kubenswrapper[4637]: I1201 14:56:35.452917 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n2brl_f64d8237-8116-4742-8d7f-9f6e8018e4c2/kube-multus/2.log" Dec 01 14:56:35 crc kubenswrapper[4637]: I1201 14:56:35.453531 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-n2brl" event={"ID":"f64d8237-8116-4742-8d7f-9f6e8018e4c2","Type":"ContainerStarted","Data":"93a2a1df42505cfc402f30e69c0f3d8de4fb4071e3c028560c394dd6b5d3a16b"} Dec 01 14:56:40 crc kubenswrapper[4637]: I1201 14:56:40.857194 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-57d4j" Dec 01 14:56:50 crc kubenswrapper[4637]: I1201 14:56:50.498271 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r"] Dec 01 14:56:50 crc kubenswrapper[4637]: I1201 14:56:50.500368 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r" Dec 01 14:56:50 crc kubenswrapper[4637]: I1201 14:56:50.502163 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 01 14:56:50 crc kubenswrapper[4637]: I1201 14:56:50.515395 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r"] Dec 01 14:56:50 crc kubenswrapper[4637]: I1201 14:56:50.552779 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b474756e-aed2-462a-be8d-0ac67a276717-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r\" (UID: \"b474756e-aed2-462a-be8d-0ac67a276717\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r" Dec 01 14:56:50 crc kubenswrapper[4637]: I1201 14:56:50.552959 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b474756e-aed2-462a-be8d-0ac67a276717-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r\" (UID: \"b474756e-aed2-462a-be8d-0ac67a276717\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r" Dec 01 14:56:50 crc kubenswrapper[4637]: I1201 14:56:50.553186 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j8wx\" (UniqueName: \"kubernetes.io/projected/b474756e-aed2-462a-be8d-0ac67a276717-kube-api-access-2j8wx\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r\" (UID: \"b474756e-aed2-462a-be8d-0ac67a276717\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r" Dec 01 14:56:50 crc kubenswrapper[4637]: 
I1201 14:56:50.654004 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b474756e-aed2-462a-be8d-0ac67a276717-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r\" (UID: \"b474756e-aed2-462a-be8d-0ac67a276717\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r" Dec 01 14:56:50 crc kubenswrapper[4637]: I1201 14:56:50.654098 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b474756e-aed2-462a-be8d-0ac67a276717-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r\" (UID: \"b474756e-aed2-462a-be8d-0ac67a276717\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r" Dec 01 14:56:50 crc kubenswrapper[4637]: I1201 14:56:50.654136 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j8wx\" (UniqueName: \"kubernetes.io/projected/b474756e-aed2-462a-be8d-0ac67a276717-kube-api-access-2j8wx\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r\" (UID: \"b474756e-aed2-462a-be8d-0ac67a276717\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r" Dec 01 14:56:50 crc kubenswrapper[4637]: I1201 14:56:50.654747 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b474756e-aed2-462a-be8d-0ac67a276717-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r\" (UID: \"b474756e-aed2-462a-be8d-0ac67a276717\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r" Dec 01 14:56:50 crc kubenswrapper[4637]: I1201 14:56:50.655008 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/b474756e-aed2-462a-be8d-0ac67a276717-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r\" (UID: \"b474756e-aed2-462a-be8d-0ac67a276717\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r" Dec 01 14:56:50 crc kubenswrapper[4637]: I1201 14:56:50.677503 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j8wx\" (UniqueName: \"kubernetes.io/projected/b474756e-aed2-462a-be8d-0ac67a276717-kube-api-access-2j8wx\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r\" (UID: \"b474756e-aed2-462a-be8d-0ac67a276717\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r" Dec 01 14:56:50 crc kubenswrapper[4637]: I1201 14:56:50.818044 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r" Dec 01 14:56:51 crc kubenswrapper[4637]: I1201 14:56:51.065075 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r"] Dec 01 14:56:51 crc kubenswrapper[4637]: W1201 14:56:51.074047 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb474756e_aed2_462a_be8d_0ac67a276717.slice/crio-04e18d0f281853e7c5846d5180691d65951a11a6af9fa6094364ae7701a71e92 WatchSource:0}: Error finding container 04e18d0f281853e7c5846d5180691d65951a11a6af9fa6094364ae7701a71e92: Status 404 returned error can't find the container with id 04e18d0f281853e7c5846d5180691d65951a11a6af9fa6094364ae7701a71e92 Dec 01 14:56:51 crc kubenswrapper[4637]: I1201 14:56:51.578122 4637 generic.go:334] "Generic (PLEG): container finished" podID="b474756e-aed2-462a-be8d-0ac67a276717" containerID="639c2afbefce8b67f0fd709321d059f6670ca4ef24d0a6bb8b0e2433bf14ba21" 
exitCode=0 Dec 01 14:56:51 crc kubenswrapper[4637]: I1201 14:56:51.578207 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r" event={"ID":"b474756e-aed2-462a-be8d-0ac67a276717","Type":"ContainerDied","Data":"639c2afbefce8b67f0fd709321d059f6670ca4ef24d0a6bb8b0e2433bf14ba21"} Dec 01 14:56:51 crc kubenswrapper[4637]: I1201 14:56:51.578654 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r" event={"ID":"b474756e-aed2-462a-be8d-0ac67a276717","Type":"ContainerStarted","Data":"04e18d0f281853e7c5846d5180691d65951a11a6af9fa6094364ae7701a71e92"} Dec 01 14:56:53 crc kubenswrapper[4637]: I1201 14:56:53.591863 4637 generic.go:334] "Generic (PLEG): container finished" podID="b474756e-aed2-462a-be8d-0ac67a276717" containerID="87a8e7bf30f9c06740e292833bc390993574b916dab21c5980b337973520abf4" exitCode=0 Dec 01 14:56:53 crc kubenswrapper[4637]: I1201 14:56:53.592115 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r" event={"ID":"b474756e-aed2-462a-be8d-0ac67a276717","Type":"ContainerDied","Data":"87a8e7bf30f9c06740e292833bc390993574b916dab21c5980b337973520abf4"} Dec 01 14:56:54 crc kubenswrapper[4637]: I1201 14:56:54.600851 4637 generic.go:334] "Generic (PLEG): container finished" podID="b474756e-aed2-462a-be8d-0ac67a276717" containerID="d889ac7547dc6594b3c9a30f0a9acda8ec3f758f405ccea284d9e3c12ddef90d" exitCode=0 Dec 01 14:56:54 crc kubenswrapper[4637]: I1201 14:56:54.600918 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r" event={"ID":"b474756e-aed2-462a-be8d-0ac67a276717","Type":"ContainerDied","Data":"d889ac7547dc6594b3c9a30f0a9acda8ec3f758f405ccea284d9e3c12ddef90d"} Dec 01 14:56:55 crc 
kubenswrapper[4637]: I1201 14:56:55.869335 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r" Dec 01 14:56:56 crc kubenswrapper[4637]: I1201 14:56:56.029835 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b474756e-aed2-462a-be8d-0ac67a276717-util\") pod \"b474756e-aed2-462a-be8d-0ac67a276717\" (UID: \"b474756e-aed2-462a-be8d-0ac67a276717\") " Dec 01 14:56:56 crc kubenswrapper[4637]: I1201 14:56:56.029964 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b474756e-aed2-462a-be8d-0ac67a276717-bundle\") pod \"b474756e-aed2-462a-be8d-0ac67a276717\" (UID: \"b474756e-aed2-462a-be8d-0ac67a276717\") " Dec 01 14:56:56 crc kubenswrapper[4637]: I1201 14:56:56.030113 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j8wx\" (UniqueName: \"kubernetes.io/projected/b474756e-aed2-462a-be8d-0ac67a276717-kube-api-access-2j8wx\") pod \"b474756e-aed2-462a-be8d-0ac67a276717\" (UID: \"b474756e-aed2-462a-be8d-0ac67a276717\") " Dec 01 14:56:56 crc kubenswrapper[4637]: I1201 14:56:56.030719 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b474756e-aed2-462a-be8d-0ac67a276717-bundle" (OuterVolumeSpecName: "bundle") pod "b474756e-aed2-462a-be8d-0ac67a276717" (UID: "b474756e-aed2-462a-be8d-0ac67a276717"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:56:56 crc kubenswrapper[4637]: I1201 14:56:56.037177 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b474756e-aed2-462a-be8d-0ac67a276717-kube-api-access-2j8wx" (OuterVolumeSpecName: "kube-api-access-2j8wx") pod "b474756e-aed2-462a-be8d-0ac67a276717" (UID: "b474756e-aed2-462a-be8d-0ac67a276717"). InnerVolumeSpecName "kube-api-access-2j8wx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:56:56 crc kubenswrapper[4637]: I1201 14:56:56.046628 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b474756e-aed2-462a-be8d-0ac67a276717-util" (OuterVolumeSpecName: "util") pod "b474756e-aed2-462a-be8d-0ac67a276717" (UID: "b474756e-aed2-462a-be8d-0ac67a276717"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:56:56 crc kubenswrapper[4637]: I1201 14:56:56.132606 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2j8wx\" (UniqueName: \"kubernetes.io/projected/b474756e-aed2-462a-be8d-0ac67a276717-kube-api-access-2j8wx\") on node \"crc\" DevicePath \"\"" Dec 01 14:56:56 crc kubenswrapper[4637]: I1201 14:56:56.132666 4637 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b474756e-aed2-462a-be8d-0ac67a276717-util\") on node \"crc\" DevicePath \"\"" Dec 01 14:56:56 crc kubenswrapper[4637]: I1201 14:56:56.132681 4637 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b474756e-aed2-462a-be8d-0ac67a276717-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 14:56:56 crc kubenswrapper[4637]: I1201 14:56:56.618119 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r" 
event={"ID":"b474756e-aed2-462a-be8d-0ac67a276717","Type":"ContainerDied","Data":"04e18d0f281853e7c5846d5180691d65951a11a6af9fa6094364ae7701a71e92"} Dec 01 14:56:56 crc kubenswrapper[4637]: I1201 14:56:56.618193 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04e18d0f281853e7c5846d5180691d65951a11a6af9fa6094364ae7701a71e92" Dec 01 14:56:56 crc kubenswrapper[4637]: I1201 14:56:56.618205 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r" Dec 01 14:56:58 crc kubenswrapper[4637]: I1201 14:56:58.079044 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-ccqrg"] Dec 01 14:56:58 crc kubenswrapper[4637]: E1201 14:56:58.079575 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b474756e-aed2-462a-be8d-0ac67a276717" containerName="pull" Dec 01 14:56:58 crc kubenswrapper[4637]: I1201 14:56:58.079588 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="b474756e-aed2-462a-be8d-0ac67a276717" containerName="pull" Dec 01 14:56:58 crc kubenswrapper[4637]: E1201 14:56:58.079605 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b474756e-aed2-462a-be8d-0ac67a276717" containerName="extract" Dec 01 14:56:58 crc kubenswrapper[4637]: I1201 14:56:58.079614 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="b474756e-aed2-462a-be8d-0ac67a276717" containerName="extract" Dec 01 14:56:58 crc kubenswrapper[4637]: E1201 14:56:58.079629 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b474756e-aed2-462a-be8d-0ac67a276717" containerName="util" Dec 01 14:56:58 crc kubenswrapper[4637]: I1201 14:56:58.079637 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="b474756e-aed2-462a-be8d-0ac67a276717" containerName="util" Dec 01 14:56:58 crc kubenswrapper[4637]: I1201 14:56:58.079757 4637 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="b474756e-aed2-462a-be8d-0ac67a276717" containerName="extract" Dec 01 14:56:58 crc kubenswrapper[4637]: I1201 14:56:58.080202 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ccqrg" Dec 01 14:56:58 crc kubenswrapper[4637]: I1201 14:56:58.082684 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-q5lz9" Dec 01 14:56:58 crc kubenswrapper[4637]: I1201 14:56:58.082689 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 01 14:56:58 crc kubenswrapper[4637]: I1201 14:56:58.082976 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 01 14:56:58 crc kubenswrapper[4637]: I1201 14:56:58.098187 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-ccqrg"] Dec 01 14:56:58 crc kubenswrapper[4637]: I1201 14:56:58.159905 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpcgd\" (UniqueName: \"kubernetes.io/projected/802cfe24-78e7-428d-89b5-04b5a610b9fb-kube-api-access-hpcgd\") pod \"nmstate-operator-5b5b58f5c8-ccqrg\" (UID: \"802cfe24-78e7-428d-89b5-04b5a610b9fb\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ccqrg" Dec 01 14:56:58 crc kubenswrapper[4637]: I1201 14:56:58.261004 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpcgd\" (UniqueName: \"kubernetes.io/projected/802cfe24-78e7-428d-89b5-04b5a610b9fb-kube-api-access-hpcgd\") pod \"nmstate-operator-5b5b58f5c8-ccqrg\" (UID: \"802cfe24-78e7-428d-89b5-04b5a610b9fb\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ccqrg" Dec 01 14:56:58 crc kubenswrapper[4637]: I1201 14:56:58.280870 4637 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hpcgd\" (UniqueName: \"kubernetes.io/projected/802cfe24-78e7-428d-89b5-04b5a610b9fb-kube-api-access-hpcgd\") pod \"nmstate-operator-5b5b58f5c8-ccqrg\" (UID: \"802cfe24-78e7-428d-89b5-04b5a610b9fb\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ccqrg" Dec 01 14:56:58 crc kubenswrapper[4637]: I1201 14:56:58.402556 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ccqrg" Dec 01 14:56:58 crc kubenswrapper[4637]: I1201 14:56:58.644500 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-ccqrg"] Dec 01 14:56:59 crc kubenswrapper[4637]: I1201 14:56:59.635515 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ccqrg" event={"ID":"802cfe24-78e7-428d-89b5-04b5a610b9fb","Type":"ContainerStarted","Data":"72ff4f1813f72096a4487b0e52e4c4ccd4284905a96618669443c8741e5b1e37"} Dec 01 14:57:01 crc kubenswrapper[4637]: I1201 14:57:01.648897 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ccqrg" event={"ID":"802cfe24-78e7-428d-89b5-04b5a610b9fb","Type":"ContainerStarted","Data":"93b66b9ece7164d727e120b8b150f0021ee1985f7e15bca03abe45843b31e789"} Dec 01 14:57:01 crc kubenswrapper[4637]: I1201 14:57:01.670677 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ccqrg" podStartSLOduration=1.071348138 podStartE2EDuration="3.670648175s" podCreationTimestamp="2025-12-01 14:56:58 +0000 UTC" firstStartedPulling="2025-12-01 14:56:58.651073513 +0000 UTC m=+669.168782341" lastFinishedPulling="2025-12-01 14:57:01.25037355 +0000 UTC m=+671.768082378" observedRunningTime="2025-12-01 14:57:01.666975316 +0000 UTC m=+672.184684164" watchObservedRunningTime="2025-12-01 14:57:01.670648175 +0000 UTC m=+672.188357003" Dec 01 14:57:02 crc 
kubenswrapper[4637]: I1201 14:57:02.685804 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-tnkgd"] Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.687792 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-tnkgd" Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.695558 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-6nmjv" Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.705667 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz4j4"] Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.706839 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz4j4" Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.713033 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.738938 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-vjj7n"] Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.740019 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-vjj7n" Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.742317 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-tnkgd"] Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.745656 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz4j4"] Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.746023 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3ed53917-b528-4d89-9503-578c448fd6c7-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-mz4j4\" (UID: \"3ed53917-b528-4d89-9503-578c448fd6c7\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz4j4" Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.746073 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l9rh\" (UniqueName: \"kubernetes.io/projected/3ed53917-b528-4d89-9503-578c448fd6c7-kube-api-access-6l9rh\") pod \"nmstate-webhook-5f6d4c5ccb-mz4j4\" (UID: \"3ed53917-b528-4d89-9503-578c448fd6c7\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz4j4" Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.746118 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s77xk\" (UniqueName: \"kubernetes.io/projected/508d135b-1c5a-49db-a896-e7489b8c9968-kube-api-access-s77xk\") pod \"nmstate-metrics-7f946cbc9-tnkgd\" (UID: \"508d135b-1c5a-49db-a896-e7489b8c9968\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-tnkgd" Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.847913 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/7375b015-a69b-4993-abf8-6c18215144da-nmstate-lock\") pod 
\"nmstate-handler-vjj7n\" (UID: \"7375b015-a69b-4993-abf8-6c18215144da\") " pod="openshift-nmstate/nmstate-handler-vjj7n" Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.848153 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s77xk\" (UniqueName: \"kubernetes.io/projected/508d135b-1c5a-49db-a896-e7489b8c9968-kube-api-access-s77xk\") pod \"nmstate-metrics-7f946cbc9-tnkgd\" (UID: \"508d135b-1c5a-49db-a896-e7489b8c9968\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-tnkgd" Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.848214 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/7375b015-a69b-4993-abf8-6c18215144da-dbus-socket\") pod \"nmstate-handler-vjj7n\" (UID: \"7375b015-a69b-4993-abf8-6c18215144da\") " pod="openshift-nmstate/nmstate-handler-vjj7n" Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.848245 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9swc8\" (UniqueName: \"kubernetes.io/projected/7375b015-a69b-4993-abf8-6c18215144da-kube-api-access-9swc8\") pod \"nmstate-handler-vjj7n\" (UID: \"7375b015-a69b-4993-abf8-6c18215144da\") " pod="openshift-nmstate/nmstate-handler-vjj7n" Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.848398 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/7375b015-a69b-4993-abf8-6c18215144da-ovs-socket\") pod \"nmstate-handler-vjj7n\" (UID: \"7375b015-a69b-4993-abf8-6c18215144da\") " pod="openshift-nmstate/nmstate-handler-vjj7n" Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.848450 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3ed53917-b528-4d89-9503-578c448fd6c7-tls-key-pair\") pod 
\"nmstate-webhook-5f6d4c5ccb-mz4j4\" (UID: \"3ed53917-b528-4d89-9503-578c448fd6c7\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz4j4" Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.848528 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l9rh\" (UniqueName: \"kubernetes.io/projected/3ed53917-b528-4d89-9503-578c448fd6c7-kube-api-access-6l9rh\") pod \"nmstate-webhook-5f6d4c5ccb-mz4j4\" (UID: \"3ed53917-b528-4d89-9503-578c448fd6c7\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz4j4" Dec 01 14:57:02 crc kubenswrapper[4637]: E1201 14:57:02.849016 4637 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Dec 01 14:57:02 crc kubenswrapper[4637]: E1201 14:57:02.849078 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ed53917-b528-4d89-9503-578c448fd6c7-tls-key-pair podName:3ed53917-b528-4d89-9503-578c448fd6c7 nodeName:}" failed. No retries permitted until 2025-12-01 14:57:03.349055176 +0000 UTC m=+673.866764004 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/3ed53917-b528-4d89-9503-578c448fd6c7-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-mz4j4" (UID: "3ed53917-b528-4d89-9503-578c448fd6c7") : secret "openshift-nmstate-webhook" not found Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.881655 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s77xk\" (UniqueName: \"kubernetes.io/projected/508d135b-1c5a-49db-a896-e7489b8c9968-kube-api-access-s77xk\") pod \"nmstate-metrics-7f946cbc9-tnkgd\" (UID: \"508d135b-1c5a-49db-a896-e7489b8c9968\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-tnkgd" Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.887169 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l9rh\" (UniqueName: \"kubernetes.io/projected/3ed53917-b528-4d89-9503-578c448fd6c7-kube-api-access-6l9rh\") pod \"nmstate-webhook-5f6d4c5ccb-mz4j4\" (UID: \"3ed53917-b528-4d89-9503-578c448fd6c7\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz4j4" Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.946477 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vggm6"] Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.947381 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vggm6" Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.949812 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/7375b015-a69b-4993-abf8-6c18215144da-dbus-socket\") pod \"nmstate-handler-vjj7n\" (UID: \"7375b015-a69b-4993-abf8-6c18215144da\") " pod="openshift-nmstate/nmstate-handler-vjj7n" Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.949845 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9swc8\" (UniqueName: \"kubernetes.io/projected/7375b015-a69b-4993-abf8-6c18215144da-kube-api-access-9swc8\") pod \"nmstate-handler-vjj7n\" (UID: \"7375b015-a69b-4993-abf8-6c18215144da\") " pod="openshift-nmstate/nmstate-handler-vjj7n" Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.949886 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/7375b015-a69b-4993-abf8-6c18215144da-ovs-socket\") pod \"nmstate-handler-vjj7n\" (UID: \"7375b015-a69b-4993-abf8-6c18215144da\") " pod="openshift-nmstate/nmstate-handler-vjj7n" Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.949953 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/7375b015-a69b-4993-abf8-6c18215144da-nmstate-lock\") pod \"nmstate-handler-vjj7n\" (UID: \"7375b015-a69b-4993-abf8-6c18215144da\") " pod="openshift-nmstate/nmstate-handler-vjj7n" Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.950031 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/7375b015-a69b-4993-abf8-6c18215144da-nmstate-lock\") pod \"nmstate-handler-vjj7n\" (UID: \"7375b015-a69b-4993-abf8-6c18215144da\") " pod="openshift-nmstate/nmstate-handler-vjj7n" Dec 01 
14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.950402 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/7375b015-a69b-4993-abf8-6c18215144da-dbus-socket\") pod \"nmstate-handler-vjj7n\" (UID: \"7375b015-a69b-4993-abf8-6c18215144da\") " pod="openshift-nmstate/nmstate-handler-vjj7n" Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.950684 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/7375b015-a69b-4993-abf8-6c18215144da-ovs-socket\") pod \"nmstate-handler-vjj7n\" (UID: \"7375b015-a69b-4993-abf8-6c18215144da\") " pod="openshift-nmstate/nmstate-handler-vjj7n" Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.951047 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-x5j5q" Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.951227 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.953067 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 01 14:57:02 crc kubenswrapper[4637]: I1201 14:57:02.984080 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vggm6"] Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.001453 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9swc8\" (UniqueName: \"kubernetes.io/projected/7375b015-a69b-4993-abf8-6c18215144da-kube-api-access-9swc8\") pod \"nmstate-handler-vjj7n\" (UID: \"7375b015-a69b-4993-abf8-6c18215144da\") " pod="openshift-nmstate/nmstate-handler-vjj7n" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.012499 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-tnkgd" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.052519 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw828\" (UniqueName: \"kubernetes.io/projected/8c3b9a86-e588-47f5-a465-45691a6808e1-kube-api-access-rw828\") pod \"nmstate-console-plugin-7fbb5f6569-vggm6\" (UID: \"8c3b9a86-e588-47f5-a465-45691a6808e1\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vggm6" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.052789 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8c3b9a86-e588-47f5-a465-45691a6808e1-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-vggm6\" (UID: \"8c3b9a86-e588-47f5-a465-45691a6808e1\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vggm6" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.053179 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8c3b9a86-e588-47f5-a465-45691a6808e1-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-vggm6\" (UID: \"8c3b9a86-e588-47f5-a465-45691a6808e1\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vggm6" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.055329 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-vjj7n" Dec 01 14:57:03 crc kubenswrapper[4637]: W1201 14:57:03.140228 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7375b015_a69b_4993_abf8_6c18215144da.slice/crio-d4b2acc2d3622517196a40286d7ff5c9fdbb6af5b7be0ec20ca5356a628d0daa WatchSource:0}: Error finding container d4b2acc2d3622517196a40286d7ff5c9fdbb6af5b7be0ec20ca5356a628d0daa: Status 404 returned error can't find the container with id d4b2acc2d3622517196a40286d7ff5c9fdbb6af5b7be0ec20ca5356a628d0daa Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.155450 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw828\" (UniqueName: \"kubernetes.io/projected/8c3b9a86-e588-47f5-a465-45691a6808e1-kube-api-access-rw828\") pod \"nmstate-console-plugin-7fbb5f6569-vggm6\" (UID: \"8c3b9a86-e588-47f5-a465-45691a6808e1\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vggm6" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.155542 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8c3b9a86-e588-47f5-a465-45691a6808e1-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-vggm6\" (UID: \"8c3b9a86-e588-47f5-a465-45691a6808e1\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vggm6" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.155613 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8c3b9a86-e588-47f5-a465-45691a6808e1-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-vggm6\" (UID: \"8c3b9a86-e588-47f5-a465-45691a6808e1\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vggm6" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.156783 4637 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8c3b9a86-e588-47f5-a465-45691a6808e1-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-vggm6\" (UID: \"8c3b9a86-e588-47f5-a465-45691a6808e1\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vggm6" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.167736 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8c3b9a86-e588-47f5-a465-45691a6808e1-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-vggm6\" (UID: \"8c3b9a86-e588-47f5-a465-45691a6808e1\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vggm6" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.185875 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw828\" (UniqueName: \"kubernetes.io/projected/8c3b9a86-e588-47f5-a465-45691a6808e1-kube-api-access-rw828\") pod \"nmstate-console-plugin-7fbb5f6569-vggm6\" (UID: \"8c3b9a86-e588-47f5-a465-45691a6808e1\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vggm6" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.241500 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-59b6c6f859-gk6kt"] Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.279547 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-59b6c6f859-gk6kt" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.290661 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-59b6c6f859-gk6kt"] Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.295111 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vggm6" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.396278 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e597cfc6-66c3-4b5f-8fd2-0e814ad221e8-service-ca\") pod \"console-59b6c6f859-gk6kt\" (UID: \"e597cfc6-66c3-4b5f-8fd2-0e814ad221e8\") " pod="openshift-console/console-59b6c6f859-gk6kt" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.396654 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e597cfc6-66c3-4b5f-8fd2-0e814ad221e8-console-config\") pod \"console-59b6c6f859-gk6kt\" (UID: \"e597cfc6-66c3-4b5f-8fd2-0e814ad221e8\") " pod="openshift-console/console-59b6c6f859-gk6kt" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.396724 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e597cfc6-66c3-4b5f-8fd2-0e814ad221e8-oauth-serving-cert\") pod \"console-59b6c6f859-gk6kt\" (UID: \"e597cfc6-66c3-4b5f-8fd2-0e814ad221e8\") " pod="openshift-console/console-59b6c6f859-gk6kt" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.396760 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3ed53917-b528-4d89-9503-578c448fd6c7-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-mz4j4\" (UID: \"3ed53917-b528-4d89-9503-578c448fd6c7\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz4j4" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.396793 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9kcq\" (UniqueName: 
\"kubernetes.io/projected/e597cfc6-66c3-4b5f-8fd2-0e814ad221e8-kube-api-access-q9kcq\") pod \"console-59b6c6f859-gk6kt\" (UID: \"e597cfc6-66c3-4b5f-8fd2-0e814ad221e8\") " pod="openshift-console/console-59b6c6f859-gk6kt" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.396811 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e597cfc6-66c3-4b5f-8fd2-0e814ad221e8-console-serving-cert\") pod \"console-59b6c6f859-gk6kt\" (UID: \"e597cfc6-66c3-4b5f-8fd2-0e814ad221e8\") " pod="openshift-console/console-59b6c6f859-gk6kt" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.396852 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e597cfc6-66c3-4b5f-8fd2-0e814ad221e8-trusted-ca-bundle\") pod \"console-59b6c6f859-gk6kt\" (UID: \"e597cfc6-66c3-4b5f-8fd2-0e814ad221e8\") " pod="openshift-console/console-59b6c6f859-gk6kt" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.396871 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e597cfc6-66c3-4b5f-8fd2-0e814ad221e8-console-oauth-config\") pod \"console-59b6c6f859-gk6kt\" (UID: \"e597cfc6-66c3-4b5f-8fd2-0e814ad221e8\") " pod="openshift-console/console-59b6c6f859-gk6kt" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.412178 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3ed53917-b528-4d89-9503-578c448fd6c7-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-mz4j4\" (UID: \"3ed53917-b528-4d89-9503-578c448fd6c7\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz4j4" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.438707 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-tnkgd"] Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.500332 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e597cfc6-66c3-4b5f-8fd2-0e814ad221e8-service-ca\") pod \"console-59b6c6f859-gk6kt\" (UID: \"e597cfc6-66c3-4b5f-8fd2-0e814ad221e8\") " pod="openshift-console/console-59b6c6f859-gk6kt" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.500437 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e597cfc6-66c3-4b5f-8fd2-0e814ad221e8-console-config\") pod \"console-59b6c6f859-gk6kt\" (UID: \"e597cfc6-66c3-4b5f-8fd2-0e814ad221e8\") " pod="openshift-console/console-59b6c6f859-gk6kt" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.501875 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e597cfc6-66c3-4b5f-8fd2-0e814ad221e8-oauth-serving-cert\") pod \"console-59b6c6f859-gk6kt\" (UID: \"e597cfc6-66c3-4b5f-8fd2-0e814ad221e8\") " pod="openshift-console/console-59b6c6f859-gk6kt" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.502748 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e597cfc6-66c3-4b5f-8fd2-0e814ad221e8-service-ca\") pod \"console-59b6c6f859-gk6kt\" (UID: \"e597cfc6-66c3-4b5f-8fd2-0e814ad221e8\") " pod="openshift-console/console-59b6c6f859-gk6kt" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.506455 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e597cfc6-66c3-4b5f-8fd2-0e814ad221e8-console-config\") pod \"console-59b6c6f859-gk6kt\" (UID: \"e597cfc6-66c3-4b5f-8fd2-0e814ad221e8\") " pod="openshift-console/console-59b6c6f859-gk6kt" Dec 01 14:57:03 crc 
kubenswrapper[4637]: I1201 14:57:03.506541 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e597cfc6-66c3-4b5f-8fd2-0e814ad221e8-oauth-serving-cert\") pod \"console-59b6c6f859-gk6kt\" (UID: \"e597cfc6-66c3-4b5f-8fd2-0e814ad221e8\") " pod="openshift-console/console-59b6c6f859-gk6kt" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.506768 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9kcq\" (UniqueName: \"kubernetes.io/projected/e597cfc6-66c3-4b5f-8fd2-0e814ad221e8-kube-api-access-q9kcq\") pod \"console-59b6c6f859-gk6kt\" (UID: \"e597cfc6-66c3-4b5f-8fd2-0e814ad221e8\") " pod="openshift-console/console-59b6c6f859-gk6kt" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.506799 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e597cfc6-66c3-4b5f-8fd2-0e814ad221e8-console-serving-cert\") pod \"console-59b6c6f859-gk6kt\" (UID: \"e597cfc6-66c3-4b5f-8fd2-0e814ad221e8\") " pod="openshift-console/console-59b6c6f859-gk6kt" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.506880 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e597cfc6-66c3-4b5f-8fd2-0e814ad221e8-trusted-ca-bundle\") pod \"console-59b6c6f859-gk6kt\" (UID: \"e597cfc6-66c3-4b5f-8fd2-0e814ad221e8\") " pod="openshift-console/console-59b6c6f859-gk6kt" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.506899 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e597cfc6-66c3-4b5f-8fd2-0e814ad221e8-console-oauth-config\") pod \"console-59b6c6f859-gk6kt\" (UID: \"e597cfc6-66c3-4b5f-8fd2-0e814ad221e8\") " pod="openshift-console/console-59b6c6f859-gk6kt" Dec 01 14:57:03 crc 
kubenswrapper[4637]: I1201 14:57:03.509813 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e597cfc6-66c3-4b5f-8fd2-0e814ad221e8-trusted-ca-bundle\") pod \"console-59b6c6f859-gk6kt\" (UID: \"e597cfc6-66c3-4b5f-8fd2-0e814ad221e8\") " pod="openshift-console/console-59b6c6f859-gk6kt" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.515216 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e597cfc6-66c3-4b5f-8fd2-0e814ad221e8-console-oauth-config\") pod \"console-59b6c6f859-gk6kt\" (UID: \"e597cfc6-66c3-4b5f-8fd2-0e814ad221e8\") " pod="openshift-console/console-59b6c6f859-gk6kt" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.518948 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e597cfc6-66c3-4b5f-8fd2-0e814ad221e8-console-serving-cert\") pod \"console-59b6c6f859-gk6kt\" (UID: \"e597cfc6-66c3-4b5f-8fd2-0e814ad221e8\") " pod="openshift-console/console-59b6c6f859-gk6kt" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.532314 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9kcq\" (UniqueName: \"kubernetes.io/projected/e597cfc6-66c3-4b5f-8fd2-0e814ad221e8-kube-api-access-q9kcq\") pod \"console-59b6c6f859-gk6kt\" (UID: \"e597cfc6-66c3-4b5f-8fd2-0e814ad221e8\") " pod="openshift-console/console-59b6c6f859-gk6kt" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.613816 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-59b6c6f859-gk6kt" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.621228 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz4j4" Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.660475 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vggm6"] Dec 01 14:57:03 crc kubenswrapper[4637]: W1201 14:57:03.663183 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c3b9a86_e588_47f5_a465_45691a6808e1.slice/crio-e7c2184d2a36691ef8ecefa8830e7d40822bcfe9ed178b97094800d06a3905bb WatchSource:0}: Error finding container e7c2184d2a36691ef8ecefa8830e7d40822bcfe9ed178b97094800d06a3905bb: Status 404 returned error can't find the container with id e7c2184d2a36691ef8ecefa8830e7d40822bcfe9ed178b97094800d06a3905bb Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.666074 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-vjj7n" event={"ID":"7375b015-a69b-4993-abf8-6c18215144da","Type":"ContainerStarted","Data":"d4b2acc2d3622517196a40286d7ff5c9fdbb6af5b7be0ec20ca5356a628d0daa"} Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.667120 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-tnkgd" event={"ID":"508d135b-1c5a-49db-a896-e7489b8c9968","Type":"ContainerStarted","Data":"f26b572b2f75d84fd8c2da3c93d1ade1f9971d26328b9c95a3dee133358e6b84"} Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.883072 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-59b6c6f859-gk6kt"] Dec 01 14:57:03 crc kubenswrapper[4637]: I1201 14:57:03.954581 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz4j4"] Dec 01 14:57:03 crc kubenswrapper[4637]: W1201 14:57:03.968796 4637 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ed53917_b528_4d89_9503_578c448fd6c7.slice/crio-b491afd46fe358f5ada1acf1b834f787ab70353f9328e16676bbdea524099dbc WatchSource:0}: Error finding container b491afd46fe358f5ada1acf1b834f787ab70353f9328e16676bbdea524099dbc: Status 404 returned error can't find the container with id b491afd46fe358f5ada1acf1b834f787ab70353f9328e16676bbdea524099dbc Dec 01 14:57:04 crc kubenswrapper[4637]: I1201 14:57:04.675055 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-59b6c6f859-gk6kt" event={"ID":"e597cfc6-66c3-4b5f-8fd2-0e814ad221e8","Type":"ContainerStarted","Data":"845ab217950a56b9acd8f8363ad41e809e36af24806666cf0f5333223eb25522"} Dec 01 14:57:04 crc kubenswrapper[4637]: I1201 14:57:04.675120 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-59b6c6f859-gk6kt" event={"ID":"e597cfc6-66c3-4b5f-8fd2-0e814ad221e8","Type":"ContainerStarted","Data":"06c180bfc4470ad86424772ff42351c133fd68a51b3f0c913acfa76c8a8fe0b4"} Dec 01 14:57:04 crc kubenswrapper[4637]: I1201 14:57:04.686898 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vggm6" event={"ID":"8c3b9a86-e588-47f5-a465-45691a6808e1","Type":"ContainerStarted","Data":"e7c2184d2a36691ef8ecefa8830e7d40822bcfe9ed178b97094800d06a3905bb"} Dec 01 14:57:04 crc kubenswrapper[4637]: I1201 14:57:04.689380 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz4j4" event={"ID":"3ed53917-b528-4d89-9503-578c448fd6c7","Type":"ContainerStarted","Data":"b491afd46fe358f5ada1acf1b834f787ab70353f9328e16676bbdea524099dbc"} Dec 01 14:57:04 crc kubenswrapper[4637]: I1201 14:57:04.702380 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-59b6c6f859-gk6kt" podStartSLOduration=1.702354985 podStartE2EDuration="1.702354985s" podCreationTimestamp="2025-12-01 
14:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:57:04.700885205 +0000 UTC m=+675.218594043" watchObservedRunningTime="2025-12-01 14:57:04.702354985 +0000 UTC m=+675.220063813" Dec 01 14:57:07 crc kubenswrapper[4637]: I1201 14:57:07.716315 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-vjj7n" event={"ID":"7375b015-a69b-4993-abf8-6c18215144da","Type":"ContainerStarted","Data":"a147e5d78f711c11442a7d5b712956db3a09939fab4796722a5482c84d6d4785"} Dec 01 14:57:07 crc kubenswrapper[4637]: I1201 14:57:07.716838 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-vjj7n" Dec 01 14:57:07 crc kubenswrapper[4637]: I1201 14:57:07.718185 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-tnkgd" event={"ID":"508d135b-1c5a-49db-a896-e7489b8c9968","Type":"ContainerStarted","Data":"c9f45875df0222e36d086b48743474a4084b9beaa8c2dec6643159cea3f8b467"} Dec 01 14:57:07 crc kubenswrapper[4637]: I1201 14:57:07.719864 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vggm6" event={"ID":"8c3b9a86-e588-47f5-a465-45691a6808e1","Type":"ContainerStarted","Data":"4f8b28a417166c90f2dcadafac6844569b952a8c6e01d7329dec96f18af09747"} Dec 01 14:57:07 crc kubenswrapper[4637]: I1201 14:57:07.721586 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz4j4" event={"ID":"3ed53917-b528-4d89-9503-578c448fd6c7","Type":"ContainerStarted","Data":"897213535fcff973a071b7770bf871cbf3ac9ccacd0e576689906fbc4c3a75a8"} Dec 01 14:57:07 crc kubenswrapper[4637]: I1201 14:57:07.722623 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz4j4" Dec 01 14:57:07 crc kubenswrapper[4637]: 
I1201 14:57:07.737333 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-vjj7n" podStartSLOduration=2.294672898 podStartE2EDuration="5.737314401s" podCreationTimestamp="2025-12-01 14:57:02 +0000 UTC" firstStartedPulling="2025-12-01 14:57:03.14633247 +0000 UTC m=+673.664041298" lastFinishedPulling="2025-12-01 14:57:06.588973983 +0000 UTC m=+677.106682801" observedRunningTime="2025-12-01 14:57:07.733690554 +0000 UTC m=+678.251399392" watchObservedRunningTime="2025-12-01 14:57:07.737314401 +0000 UTC m=+678.255023229" Dec 01 14:57:07 crc kubenswrapper[4637]: I1201 14:57:07.748892 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-vggm6" podStartSLOduration=2.832774035 podStartE2EDuration="5.748857473s" podCreationTimestamp="2025-12-01 14:57:02 +0000 UTC" firstStartedPulling="2025-12-01 14:57:03.665443544 +0000 UTC m=+674.183152372" lastFinishedPulling="2025-12-01 14:57:06.581526982 +0000 UTC m=+677.099235810" observedRunningTime="2025-12-01 14:57:07.746035277 +0000 UTC m=+678.263744105" watchObservedRunningTime="2025-12-01 14:57:07.748857473 +0000 UTC m=+678.266566301" Dec 01 14:57:07 crc kubenswrapper[4637]: I1201 14:57:07.772771 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz4j4" podStartSLOduration=3.133485694 podStartE2EDuration="5.772751839s" podCreationTimestamp="2025-12-01 14:57:02 +0000 UTC" firstStartedPulling="2025-12-01 14:57:03.972492202 +0000 UTC m=+674.490201030" lastFinishedPulling="2025-12-01 14:57:06.611758347 +0000 UTC m=+677.129467175" observedRunningTime="2025-12-01 14:57:07.769167902 +0000 UTC m=+678.286876730" watchObservedRunningTime="2025-12-01 14:57:07.772751839 +0000 UTC m=+678.290460667" Dec 01 14:57:09 crc kubenswrapper[4637]: I1201 14:57:09.735058 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-7f946cbc9-tnkgd" event={"ID":"508d135b-1c5a-49db-a896-e7489b8c9968","Type":"ContainerStarted","Data":"d017051a283bec824e2c42b5cb23a608e776c9003892d0dd3621f362de9de302"} Dec 01 14:57:09 crc kubenswrapper[4637]: I1201 14:57:09.800702 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-tnkgd" podStartSLOduration=1.9311685760000001 podStartE2EDuration="7.800680433s" podCreationTimestamp="2025-12-01 14:57:02 +0000 UTC" firstStartedPulling="2025-12-01 14:57:03.510068258 +0000 UTC m=+674.027777086" lastFinishedPulling="2025-12-01 14:57:09.379580115 +0000 UTC m=+679.897288943" observedRunningTime="2025-12-01 14:57:09.800289422 +0000 UTC m=+680.317998260" watchObservedRunningTime="2025-12-01 14:57:09.800680433 +0000 UTC m=+680.318389261" Dec 01 14:57:13 crc kubenswrapper[4637]: I1201 14:57:13.084198 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-vjj7n" Dec 01 14:57:13 crc kubenswrapper[4637]: I1201 14:57:13.614290 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-59b6c6f859-gk6kt" Dec 01 14:57:13 crc kubenswrapper[4637]: I1201 14:57:13.614367 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-59b6c6f859-gk6kt" Dec 01 14:57:13 crc kubenswrapper[4637]: I1201 14:57:13.619279 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-59b6c6f859-gk6kt" Dec 01 14:57:13 crc kubenswrapper[4637]: I1201 14:57:13.759456 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-59b6c6f859-gk6kt" Dec 01 14:57:13 crc kubenswrapper[4637]: I1201 14:57:13.830962 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-98z2t"] Dec 01 14:57:23 crc kubenswrapper[4637]: I1201 
14:57:23.627733 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz4j4" Dec 01 14:57:38 crc kubenswrapper[4637]: I1201 14:57:38.873887 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-98z2t" podUID="6462925c-d528-4dd6-a6e1-55563db83168" containerName="console" containerID="cri-o://b479937b0ff47a530a7370ab49264dfc8d3adcf73b708f0f19db29a52792de79" gracePeriod=15 Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.268327 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-98z2t_6462925c-d528-4dd6-a6e1-55563db83168/console/0.log" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.268891 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-98z2t" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.359426 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6462925c-d528-4dd6-a6e1-55563db83168-console-config\") pod \"6462925c-d528-4dd6-a6e1-55563db83168\" (UID: \"6462925c-d528-4dd6-a6e1-55563db83168\") " Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.359519 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv54r\" (UniqueName: \"kubernetes.io/projected/6462925c-d528-4dd6-a6e1-55563db83168-kube-api-access-sv54r\") pod \"6462925c-d528-4dd6-a6e1-55563db83168\" (UID: \"6462925c-d528-4dd6-a6e1-55563db83168\") " Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.359585 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6462925c-d528-4dd6-a6e1-55563db83168-console-serving-cert\") pod \"6462925c-d528-4dd6-a6e1-55563db83168\" (UID: 
\"6462925c-d528-4dd6-a6e1-55563db83168\") " Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.359694 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6462925c-d528-4dd6-a6e1-55563db83168-console-oauth-config\") pod \"6462925c-d528-4dd6-a6e1-55563db83168\" (UID: \"6462925c-d528-4dd6-a6e1-55563db83168\") " Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.359843 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6462925c-d528-4dd6-a6e1-55563db83168-oauth-serving-cert\") pod \"6462925c-d528-4dd6-a6e1-55563db83168\" (UID: \"6462925c-d528-4dd6-a6e1-55563db83168\") " Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.361426 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6462925c-d528-4dd6-a6e1-55563db83168-trusted-ca-bundle\") pod \"6462925c-d528-4dd6-a6e1-55563db83168\" (UID: \"6462925c-d528-4dd6-a6e1-55563db83168\") " Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.361254 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6462925c-d528-4dd6-a6e1-55563db83168-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6462925c-d528-4dd6-a6e1-55563db83168" (UID: "6462925c-d528-4dd6-a6e1-55563db83168"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.361470 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6462925c-d528-4dd6-a6e1-55563db83168-service-ca\") pod \"6462925c-d528-4dd6-a6e1-55563db83168\" (UID: \"6462925c-d528-4dd6-a6e1-55563db83168\") " Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.362026 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6462925c-d528-4dd6-a6e1-55563db83168-service-ca" (OuterVolumeSpecName: "service-ca") pod "6462925c-d528-4dd6-a6e1-55563db83168" (UID: "6462925c-d528-4dd6-a6e1-55563db83168"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.362049 4637 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6462925c-d528-4dd6-a6e1-55563db83168-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.362081 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6462925c-d528-4dd6-a6e1-55563db83168-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6462925c-d528-4dd6-a6e1-55563db83168" (UID: "6462925c-d528-4dd6-a6e1-55563db83168"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.362671 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6462925c-d528-4dd6-a6e1-55563db83168-console-config" (OuterVolumeSpecName: "console-config") pod "6462925c-d528-4dd6-a6e1-55563db83168" (UID: "6462925c-d528-4dd6-a6e1-55563db83168"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.369624 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6462925c-d528-4dd6-a6e1-55563db83168-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6462925c-d528-4dd6-a6e1-55563db83168" (UID: "6462925c-d528-4dd6-a6e1-55563db83168"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.372724 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6462925c-d528-4dd6-a6e1-55563db83168-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6462925c-d528-4dd6-a6e1-55563db83168" (UID: "6462925c-d528-4dd6-a6e1-55563db83168"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.374168 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6462925c-d528-4dd6-a6e1-55563db83168-kube-api-access-sv54r" (OuterVolumeSpecName: "kube-api-access-sv54r") pod "6462925c-d528-4dd6-a6e1-55563db83168" (UID: "6462925c-d528-4dd6-a6e1-55563db83168"). InnerVolumeSpecName "kube-api-access-sv54r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.463999 4637 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6462925c-d528-4dd6-a6e1-55563db83168-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.464063 4637 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6462925c-d528-4dd6-a6e1-55563db83168-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.464074 4637 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6462925c-d528-4dd6-a6e1-55563db83168-service-ca\") on node \"crc\" DevicePath \"\"" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.464083 4637 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6462925c-d528-4dd6-a6e1-55563db83168-console-config\") on node \"crc\" DevicePath \"\"" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.464094 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv54r\" (UniqueName: \"kubernetes.io/projected/6462925c-d528-4dd6-a6e1-55563db83168-kube-api-access-sv54r\") on node \"crc\" DevicePath \"\"" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.464108 4637 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6462925c-d528-4dd6-a6e1-55563db83168-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.686727 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs"] Dec 01 14:57:39 crc kubenswrapper[4637]: E1201 14:57:39.687293 4637 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="6462925c-d528-4dd6-a6e1-55563db83168" containerName="console" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.687361 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="6462925c-d528-4dd6-a6e1-55563db83168" containerName="console" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.687541 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="6462925c-d528-4dd6-a6e1-55563db83168" containerName="console" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.688381 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.690602 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.756921 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs"] Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.769829 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-684mt\" (UniqueName: \"kubernetes.io/projected/b46b721a-cd47-48db-b343-ea841d5ae9fc-kube-api-access-684mt\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs\" (UID: \"b46b721a-cd47-48db-b343-ea841d5ae9fc\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.770409 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b46b721a-cd47-48db-b343-ea841d5ae9fc-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs\" (UID: \"b46b721a-cd47-48db-b343-ea841d5ae9fc\") " 
pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.770538 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b46b721a-cd47-48db-b343-ea841d5ae9fc-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs\" (UID: \"b46b721a-cd47-48db-b343-ea841d5ae9fc\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.872097 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-684mt\" (UniqueName: \"kubernetes.io/projected/b46b721a-cd47-48db-b343-ea841d5ae9fc-kube-api-access-684mt\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs\" (UID: \"b46b721a-cd47-48db-b343-ea841d5ae9fc\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.873856 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b46b721a-cd47-48db-b343-ea841d5ae9fc-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs\" (UID: \"b46b721a-cd47-48db-b343-ea841d5ae9fc\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.873961 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b46b721a-cd47-48db-b343-ea841d5ae9fc-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs\" (UID: \"b46b721a-cd47-48db-b343-ea841d5ae9fc\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs" Dec 01 14:57:39 crc kubenswrapper[4637]: 
I1201 14:57:39.874804 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b46b721a-cd47-48db-b343-ea841d5ae9fc-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs\" (UID: \"b46b721a-cd47-48db-b343-ea841d5ae9fc\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.877192 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b46b721a-cd47-48db-b343-ea841d5ae9fc-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs\" (UID: \"b46b721a-cd47-48db-b343-ea841d5ae9fc\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.895013 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-684mt\" (UniqueName: \"kubernetes.io/projected/b46b721a-cd47-48db-b343-ea841d5ae9fc-kube-api-access-684mt\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs\" (UID: \"b46b721a-cd47-48db-b343-ea841d5ae9fc\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.927552 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-98z2t_6462925c-d528-4dd6-a6e1-55563db83168/console/0.log" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.927607 4637 generic.go:334] "Generic (PLEG): container finished" podID="6462925c-d528-4dd6-a6e1-55563db83168" containerID="b479937b0ff47a530a7370ab49264dfc8d3adcf73b708f0f19db29a52792de79" exitCode=2 Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.927690 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-98z2t" 
event={"ID":"6462925c-d528-4dd6-a6e1-55563db83168","Type":"ContainerDied","Data":"b479937b0ff47a530a7370ab49264dfc8d3adcf73b708f0f19db29a52792de79"} Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.927711 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-98z2t" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.927762 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-98z2t" event={"ID":"6462925c-d528-4dd6-a6e1-55563db83168","Type":"ContainerDied","Data":"b7a5f1dff888a91716290436831671ce203848f808d1ec04a544e469ae1923be"} Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.927790 4637 scope.go:117] "RemoveContainer" containerID="b479937b0ff47a530a7370ab49264dfc8d3adcf73b708f0f19db29a52792de79" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.960658 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-98z2t"] Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.961613 4637 scope.go:117] "RemoveContainer" containerID="b479937b0ff47a530a7370ab49264dfc8d3adcf73b708f0f19db29a52792de79" Dec 01 14:57:39 crc kubenswrapper[4637]: E1201 14:57:39.965116 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b479937b0ff47a530a7370ab49264dfc8d3adcf73b708f0f19db29a52792de79\": container with ID starting with b479937b0ff47a530a7370ab49264dfc8d3adcf73b708f0f19db29a52792de79 not found: ID does not exist" containerID="b479937b0ff47a530a7370ab49264dfc8d3adcf73b708f0f19db29a52792de79" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.965244 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b479937b0ff47a530a7370ab49264dfc8d3adcf73b708f0f19db29a52792de79"} err="failed to get container status \"b479937b0ff47a530a7370ab49264dfc8d3adcf73b708f0f19db29a52792de79\": rpc error: code 
= NotFound desc = could not find container \"b479937b0ff47a530a7370ab49264dfc8d3adcf73b708f0f19db29a52792de79\": container with ID starting with b479937b0ff47a530a7370ab49264dfc8d3adcf73b708f0f19db29a52792de79 not found: ID does not exist" Dec 01 14:57:39 crc kubenswrapper[4637]: I1201 14:57:39.966491 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-98z2t"] Dec 01 14:57:40 crc kubenswrapper[4637]: I1201 14:57:40.062056 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs" Dec 01 14:57:40 crc kubenswrapper[4637]: I1201 14:57:40.508901 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs"] Dec 01 14:57:40 crc kubenswrapper[4637]: I1201 14:57:40.937885 4637 generic.go:334] "Generic (PLEG): container finished" podID="b46b721a-cd47-48db-b343-ea841d5ae9fc" containerID="75ae21bfb12397ec962f4f5dbaebd5f511e43439fc1e7cc1c3cd4e1fea6dce54" exitCode=0 Dec 01 14:57:40 crc kubenswrapper[4637]: I1201 14:57:40.938201 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs" event={"ID":"b46b721a-cd47-48db-b343-ea841d5ae9fc","Type":"ContainerDied","Data":"75ae21bfb12397ec962f4f5dbaebd5f511e43439fc1e7cc1c3cd4e1fea6dce54"} Dec 01 14:57:40 crc kubenswrapper[4637]: I1201 14:57:40.938489 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs" event={"ID":"b46b721a-cd47-48db-b343-ea841d5ae9fc","Type":"ContainerStarted","Data":"95ef1febf4ca919ecb1acb56d0136db669d2043d04c0f2375c657bc8b7431209"} Dec 01 14:57:41 crc kubenswrapper[4637]: I1201 14:57:41.778666 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6462925c-d528-4dd6-a6e1-55563db83168" path="/var/lib/kubelet/pods/6462925c-d528-4dd6-a6e1-55563db83168/volumes" Dec 01 14:57:42 crc kubenswrapper[4637]: I1201 14:57:42.962785 4637 generic.go:334] "Generic (PLEG): container finished" podID="b46b721a-cd47-48db-b343-ea841d5ae9fc" containerID="77443ddb9e4e10d4f70e249e4bd0ccddf762f1e91ec5c02e821835b5c2f55ac6" exitCode=0 Dec 01 14:57:42 crc kubenswrapper[4637]: I1201 14:57:42.962884 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs" event={"ID":"b46b721a-cd47-48db-b343-ea841d5ae9fc","Type":"ContainerDied","Data":"77443ddb9e4e10d4f70e249e4bd0ccddf762f1e91ec5c02e821835b5c2f55ac6"} Dec 01 14:57:43 crc kubenswrapper[4637]: I1201 14:57:43.972979 4637 generic.go:334] "Generic (PLEG): container finished" podID="b46b721a-cd47-48db-b343-ea841d5ae9fc" containerID="1fca7ba6f33592ff3f602bc98a4cf130447ed1ec5d8ef932509cde983045c12a" exitCode=0 Dec 01 14:57:43 crc kubenswrapper[4637]: I1201 14:57:43.973145 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs" event={"ID":"b46b721a-cd47-48db-b343-ea841d5ae9fc","Type":"ContainerDied","Data":"1fca7ba6f33592ff3f602bc98a4cf130447ed1ec5d8ef932509cde983045c12a"} Dec 01 14:57:45 crc kubenswrapper[4637]: I1201 14:57:45.212880 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs" Dec 01 14:57:45 crc kubenswrapper[4637]: I1201 14:57:45.253985 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b46b721a-cd47-48db-b343-ea841d5ae9fc-util\") pod \"b46b721a-cd47-48db-b343-ea841d5ae9fc\" (UID: \"b46b721a-cd47-48db-b343-ea841d5ae9fc\") " Dec 01 14:57:45 crc kubenswrapper[4637]: I1201 14:57:45.254095 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b46b721a-cd47-48db-b343-ea841d5ae9fc-bundle\") pod \"b46b721a-cd47-48db-b343-ea841d5ae9fc\" (UID: \"b46b721a-cd47-48db-b343-ea841d5ae9fc\") " Dec 01 14:57:45 crc kubenswrapper[4637]: I1201 14:57:45.254361 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-684mt\" (UniqueName: \"kubernetes.io/projected/b46b721a-cd47-48db-b343-ea841d5ae9fc-kube-api-access-684mt\") pod \"b46b721a-cd47-48db-b343-ea841d5ae9fc\" (UID: \"b46b721a-cd47-48db-b343-ea841d5ae9fc\") " Dec 01 14:57:45 crc kubenswrapper[4637]: I1201 14:57:45.256376 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b46b721a-cd47-48db-b343-ea841d5ae9fc-bundle" (OuterVolumeSpecName: "bundle") pod "b46b721a-cd47-48db-b343-ea841d5ae9fc" (UID: "b46b721a-cd47-48db-b343-ea841d5ae9fc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:57:45 crc kubenswrapper[4637]: I1201 14:57:45.263649 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b46b721a-cd47-48db-b343-ea841d5ae9fc-kube-api-access-684mt" (OuterVolumeSpecName: "kube-api-access-684mt") pod "b46b721a-cd47-48db-b343-ea841d5ae9fc" (UID: "b46b721a-cd47-48db-b343-ea841d5ae9fc"). InnerVolumeSpecName "kube-api-access-684mt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:57:45 crc kubenswrapper[4637]: I1201 14:57:45.272260 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b46b721a-cd47-48db-b343-ea841d5ae9fc-util" (OuterVolumeSpecName: "util") pod "b46b721a-cd47-48db-b343-ea841d5ae9fc" (UID: "b46b721a-cd47-48db-b343-ea841d5ae9fc"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:57:45 crc kubenswrapper[4637]: I1201 14:57:45.356117 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-684mt\" (UniqueName: \"kubernetes.io/projected/b46b721a-cd47-48db-b343-ea841d5ae9fc-kube-api-access-684mt\") on node \"crc\" DevicePath \"\"" Dec 01 14:57:45 crc kubenswrapper[4637]: I1201 14:57:45.356149 4637 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b46b721a-cd47-48db-b343-ea841d5ae9fc-util\") on node \"crc\" DevicePath \"\"" Dec 01 14:57:45 crc kubenswrapper[4637]: I1201 14:57:45.356159 4637 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b46b721a-cd47-48db-b343-ea841d5ae9fc-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 14:57:45 crc kubenswrapper[4637]: I1201 14:57:45.613202 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 14:57:45 crc kubenswrapper[4637]: I1201 14:57:45.613271 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Dec 01 14:57:45 crc kubenswrapper[4637]: I1201 14:57:45.988911 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs" event={"ID":"b46b721a-cd47-48db-b343-ea841d5ae9fc","Type":"ContainerDied","Data":"95ef1febf4ca919ecb1acb56d0136db669d2043d04c0f2375c657bc8b7431209"} Dec 01 14:57:45 crc kubenswrapper[4637]: I1201 14:57:45.988966 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95ef1febf4ca919ecb1acb56d0136db669d2043d04c0f2375c657bc8b7431209" Dec 01 14:57:45 crc kubenswrapper[4637]: I1201 14:57:45.989021 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs" Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.439124 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-8669bf5bd5-vcn5v"] Dec 01 14:57:53 crc kubenswrapper[4637]: E1201 14:57:53.440193 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b46b721a-cd47-48db-b343-ea841d5ae9fc" containerName="extract" Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.440211 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="b46b721a-cd47-48db-b343-ea841d5ae9fc" containerName="extract" Dec 01 14:57:53 crc kubenswrapper[4637]: E1201 14:57:53.440225 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b46b721a-cd47-48db-b343-ea841d5ae9fc" containerName="pull" Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.440231 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="b46b721a-cd47-48db-b343-ea841d5ae9fc" containerName="pull" Dec 01 14:57:53 crc kubenswrapper[4637]: E1201 14:57:53.440247 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b46b721a-cd47-48db-b343-ea841d5ae9fc" containerName="util" Dec 01 14:57:53 crc kubenswrapper[4637]: 
I1201 14:57:53.440252 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="b46b721a-cd47-48db-b343-ea841d5ae9fc" containerName="util" Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.440363 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="b46b721a-cd47-48db-b343-ea841d5ae9fc" containerName="extract" Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.440814 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8669bf5bd5-vcn5v" Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.447237 4637 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.447481 4637 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.453125 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.453394 4637 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-lbz29" Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.453810 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.470287 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c7d5\" (UniqueName: \"kubernetes.io/projected/baaff01d-29a0-44c3-8b9f-c8e8e3afd1f4-kube-api-access-7c7d5\") pod \"metallb-operator-controller-manager-8669bf5bd5-vcn5v\" (UID: \"baaff01d-29a0-44c3-8b9f-c8e8e3afd1f4\") " pod="metallb-system/metallb-operator-controller-manager-8669bf5bd5-vcn5v" Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.470359 
4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/baaff01d-29a0-44c3-8b9f-c8e8e3afd1f4-webhook-cert\") pod \"metallb-operator-controller-manager-8669bf5bd5-vcn5v\" (UID: \"baaff01d-29a0-44c3-8b9f-c8e8e3afd1f4\") " pod="metallb-system/metallb-operator-controller-manager-8669bf5bd5-vcn5v"
Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.470647 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/baaff01d-29a0-44c3-8b9f-c8e8e3afd1f4-apiservice-cert\") pod \"metallb-operator-controller-manager-8669bf5bd5-vcn5v\" (UID: \"baaff01d-29a0-44c3-8b9f-c8e8e3afd1f4\") " pod="metallb-system/metallb-operator-controller-manager-8669bf5bd5-vcn5v"
Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.542888 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8669bf5bd5-vcn5v"]
Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.572316 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/baaff01d-29a0-44c3-8b9f-c8e8e3afd1f4-apiservice-cert\") pod \"metallb-operator-controller-manager-8669bf5bd5-vcn5v\" (UID: \"baaff01d-29a0-44c3-8b9f-c8e8e3afd1f4\") " pod="metallb-system/metallb-operator-controller-manager-8669bf5bd5-vcn5v"
Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.572396 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c7d5\" (UniqueName: \"kubernetes.io/projected/baaff01d-29a0-44c3-8b9f-c8e8e3afd1f4-kube-api-access-7c7d5\") pod \"metallb-operator-controller-manager-8669bf5bd5-vcn5v\" (UID: \"baaff01d-29a0-44c3-8b9f-c8e8e3afd1f4\") " pod="metallb-system/metallb-operator-controller-manager-8669bf5bd5-vcn5v"
Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.572419 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/baaff01d-29a0-44c3-8b9f-c8e8e3afd1f4-webhook-cert\") pod \"metallb-operator-controller-manager-8669bf5bd5-vcn5v\" (UID: \"baaff01d-29a0-44c3-8b9f-c8e8e3afd1f4\") " pod="metallb-system/metallb-operator-controller-manager-8669bf5bd5-vcn5v"
Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.580474 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/baaff01d-29a0-44c3-8b9f-c8e8e3afd1f4-apiservice-cert\") pod \"metallb-operator-controller-manager-8669bf5bd5-vcn5v\" (UID: \"baaff01d-29a0-44c3-8b9f-c8e8e3afd1f4\") " pod="metallb-system/metallb-operator-controller-manager-8669bf5bd5-vcn5v"
Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.598752 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/baaff01d-29a0-44c3-8b9f-c8e8e3afd1f4-webhook-cert\") pod \"metallb-operator-controller-manager-8669bf5bd5-vcn5v\" (UID: \"baaff01d-29a0-44c3-8b9f-c8e8e3afd1f4\") " pod="metallb-system/metallb-operator-controller-manager-8669bf5bd5-vcn5v"
Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.608016 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c7d5\" (UniqueName: \"kubernetes.io/projected/baaff01d-29a0-44c3-8b9f-c8e8e3afd1f4-kube-api-access-7c7d5\") pod \"metallb-operator-controller-manager-8669bf5bd5-vcn5v\" (UID: \"baaff01d-29a0-44c3-8b9f-c8e8e3afd1f4\") " pod="metallb-system/metallb-operator-controller-manager-8669bf5bd5-vcn5v"
Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.724892 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-59b9f9d896-crh4k"]
Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.725755 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-59b9f9d896-crh4k"
Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.732676 4637 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-rpv5d"
Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.733414 4637 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.733553 4637 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.748864 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-59b9f9d896-crh4k"]
Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.768467 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8669bf5bd5-vcn5v"
Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.784033 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60286b73-70b9-46ce-8fca-28552760b79e-webhook-cert\") pod \"metallb-operator-webhook-server-59b9f9d896-crh4k\" (UID: \"60286b73-70b9-46ce-8fca-28552760b79e\") " pod="metallb-system/metallb-operator-webhook-server-59b9f9d896-crh4k"
Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.784093 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/60286b73-70b9-46ce-8fca-28552760b79e-apiservice-cert\") pod \"metallb-operator-webhook-server-59b9f9d896-crh4k\" (UID: \"60286b73-70b9-46ce-8fca-28552760b79e\") " pod="metallb-system/metallb-operator-webhook-server-59b9f9d896-crh4k"
Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.784170 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngxv7\" (UniqueName: \"kubernetes.io/projected/60286b73-70b9-46ce-8fca-28552760b79e-kube-api-access-ngxv7\") pod \"metallb-operator-webhook-server-59b9f9d896-crh4k\" (UID: \"60286b73-70b9-46ce-8fca-28552760b79e\") " pod="metallb-system/metallb-operator-webhook-server-59b9f9d896-crh4k"
Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.887666 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngxv7\" (UniqueName: \"kubernetes.io/projected/60286b73-70b9-46ce-8fca-28552760b79e-kube-api-access-ngxv7\") pod \"metallb-operator-webhook-server-59b9f9d896-crh4k\" (UID: \"60286b73-70b9-46ce-8fca-28552760b79e\") " pod="metallb-system/metallb-operator-webhook-server-59b9f9d896-crh4k"
Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.887774 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60286b73-70b9-46ce-8fca-28552760b79e-webhook-cert\") pod \"metallb-operator-webhook-server-59b9f9d896-crh4k\" (UID: \"60286b73-70b9-46ce-8fca-28552760b79e\") " pod="metallb-system/metallb-operator-webhook-server-59b9f9d896-crh4k"
Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.887813 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/60286b73-70b9-46ce-8fca-28552760b79e-apiservice-cert\") pod \"metallb-operator-webhook-server-59b9f9d896-crh4k\" (UID: \"60286b73-70b9-46ce-8fca-28552760b79e\") " pod="metallb-system/metallb-operator-webhook-server-59b9f9d896-crh4k"
Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.895476 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/60286b73-70b9-46ce-8fca-28552760b79e-apiservice-cert\") pod \"metallb-operator-webhook-server-59b9f9d896-crh4k\" (UID: \"60286b73-70b9-46ce-8fca-28552760b79e\") " pod="metallb-system/metallb-operator-webhook-server-59b9f9d896-crh4k"
Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.895951 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60286b73-70b9-46ce-8fca-28552760b79e-webhook-cert\") pod \"metallb-operator-webhook-server-59b9f9d896-crh4k\" (UID: \"60286b73-70b9-46ce-8fca-28552760b79e\") " pod="metallb-system/metallb-operator-webhook-server-59b9f9d896-crh4k"
Dec 01 14:57:53 crc kubenswrapper[4637]: I1201 14:57:53.920649 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngxv7\" (UniqueName: \"kubernetes.io/projected/60286b73-70b9-46ce-8fca-28552760b79e-kube-api-access-ngxv7\") pod \"metallb-operator-webhook-server-59b9f9d896-crh4k\" (UID: \"60286b73-70b9-46ce-8fca-28552760b79e\") " pod="metallb-system/metallb-operator-webhook-server-59b9f9d896-crh4k"
Dec 01 14:57:54 crc kubenswrapper[4637]: I1201 14:57:54.043854 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-59b9f9d896-crh4k"
Dec 01 14:57:54 crc kubenswrapper[4637]: I1201 14:57:54.125779 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8669bf5bd5-vcn5v"]
Dec 01 14:57:54 crc kubenswrapper[4637]: W1201 14:57:54.153182 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbaaff01d_29a0_44c3_8b9f_c8e8e3afd1f4.slice/crio-101219c00c88639fa73ed4fa93e29315d71d11df4daec140ed229d5a9c31f8bb WatchSource:0}: Error finding container 101219c00c88639fa73ed4fa93e29315d71d11df4daec140ed229d5a9c31f8bb: Status 404 returned error can't find the container with id 101219c00c88639fa73ed4fa93e29315d71d11df4daec140ed229d5a9c31f8bb
Dec 01 14:57:54 crc kubenswrapper[4637]: I1201 14:57:54.426600 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-59b9f9d896-crh4k"]
Dec 01 14:57:54 crc kubenswrapper[4637]: W1201 14:57:54.443708 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60286b73_70b9_46ce_8fca_28552760b79e.slice/crio-57e463d5083bea2b4a46f93614f9372bb1e98c466a8dc030ba8bbb19ee73cd5c WatchSource:0}: Error finding container 57e463d5083bea2b4a46f93614f9372bb1e98c466a8dc030ba8bbb19ee73cd5c: Status 404 returned error can't find the container with id 57e463d5083bea2b4a46f93614f9372bb1e98c466a8dc030ba8bbb19ee73cd5c
Dec 01 14:57:55 crc kubenswrapper[4637]: I1201 14:57:55.056535 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-59b9f9d896-crh4k" event={"ID":"60286b73-70b9-46ce-8fca-28552760b79e","Type":"ContainerStarted","Data":"57e463d5083bea2b4a46f93614f9372bb1e98c466a8dc030ba8bbb19ee73cd5c"}
Dec 01 14:57:55 crc kubenswrapper[4637]: I1201 14:57:55.057692 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8669bf5bd5-vcn5v" event={"ID":"baaff01d-29a0-44c3-8b9f-c8e8e3afd1f4","Type":"ContainerStarted","Data":"101219c00c88639fa73ed4fa93e29315d71d11df4daec140ed229d5a9c31f8bb"}
Dec 01 14:57:59 crc kubenswrapper[4637]: I1201 14:57:59.093833 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8669bf5bd5-vcn5v" event={"ID":"baaff01d-29a0-44c3-8b9f-c8e8e3afd1f4","Type":"ContainerStarted","Data":"c810dbd437c8254e9f883add72b50df204177f1aea5ecaf3b1092f207a96c732"}
Dec 01 14:57:59 crc kubenswrapper[4637]: I1201 14:57:59.100315 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-8669bf5bd5-vcn5v"
Dec 01 14:57:59 crc kubenswrapper[4637]: I1201 14:57:59.135070 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-8669bf5bd5-vcn5v" podStartSLOduration=2.115196203 podStartE2EDuration="6.135042244s" podCreationTimestamp="2025-12-01 14:57:53 +0000 UTC" firstStartedPulling="2025-12-01 14:57:54.181654719 +0000 UTC m=+724.699363547" lastFinishedPulling="2025-12-01 14:57:58.20150076 +0000 UTC m=+728.719209588" observedRunningTime="2025-12-01 14:57:59.126418963 +0000 UTC m=+729.644127791" watchObservedRunningTime="2025-12-01 14:57:59.135042244 +0000 UTC m=+729.652751072"
Dec 01 14:58:01 crc kubenswrapper[4637]: I1201 14:58:01.128271 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-59b9f9d896-crh4k" event={"ID":"60286b73-70b9-46ce-8fca-28552760b79e","Type":"ContainerStarted","Data":"c05ab966dc51e0d73666859885a2e36f3a285a37d4a25e4ec057c2b418077864"}
Dec 01 14:58:01 crc kubenswrapper[4637]: I1201 14:58:01.130168 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-59b9f9d896-crh4k"
Dec 01 14:58:01 crc kubenswrapper[4637]: I1201 14:58:01.154677 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-59b9f9d896-crh4k" podStartSLOduration=1.949762126 podStartE2EDuration="8.154654826s" podCreationTimestamp="2025-12-01 14:57:53 +0000 UTC" firstStartedPulling="2025-12-01 14:57:54.447429549 +0000 UTC m=+724.965138377" lastFinishedPulling="2025-12-01 14:58:00.652322249 +0000 UTC m=+731.170031077" observedRunningTime="2025-12-01 14:58:01.153045402 +0000 UTC m=+731.670754230" watchObservedRunningTime="2025-12-01 14:58:01.154654826 +0000 UTC m=+731.672363654"
Dec 01 14:58:14 crc kubenswrapper[4637]: I1201 14:58:14.061863 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-59b9f9d896-crh4k"
Dec 01 14:58:15 crc kubenswrapper[4637]: I1201 14:58:15.614016 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 14:58:15 crc kubenswrapper[4637]: I1201 14:58:15.614594 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 14:58:25 crc kubenswrapper[4637]: I1201 14:58:25.921617 4637 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 01 14:58:33 crc kubenswrapper[4637]: I1201 14:58:33.780147 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-8669bf5bd5-vcn5v"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.517529 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-j76nt"]
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.521305 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-mxfwh"]
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.522523 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-j76nt"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.524366 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-mxfwh"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.526008 4637 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.526287 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.527394 4637 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-cd8tt"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.527850 4637 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.559579 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-j76nt"]
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.564554 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wn7v\" (UniqueName: \"kubernetes.io/projected/44a26d22-db8c-4b4d-a6c7-286ebd0197c5-kube-api-access-4wn7v\") pod \"frr-k8s-mxfwh\" (UID: \"44a26d22-db8c-4b4d-a6c7-286ebd0197c5\") " pod="metallb-system/frr-k8s-mxfwh"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.564601 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44a26d22-db8c-4b4d-a6c7-286ebd0197c5-metrics-certs\") pod \"frr-k8s-mxfwh\" (UID: \"44a26d22-db8c-4b4d-a6c7-286ebd0197c5\") " pod="metallb-system/frr-k8s-mxfwh"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.564626 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/44a26d22-db8c-4b4d-a6c7-286ebd0197c5-frr-startup\") pod \"frr-k8s-mxfwh\" (UID: \"44a26d22-db8c-4b4d-a6c7-286ebd0197c5\") " pod="metallb-system/frr-k8s-mxfwh"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.564649 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b9100ec0-5fe8-4ad1-bce7-40ca6cf5e923-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-j76nt\" (UID: \"b9100ec0-5fe8-4ad1-bce7-40ca6cf5e923\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-j76nt"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.564668 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5nhn\" (UniqueName: \"kubernetes.io/projected/b9100ec0-5fe8-4ad1-bce7-40ca6cf5e923-kube-api-access-n5nhn\") pod \"frr-k8s-webhook-server-7fcb986d4-j76nt\" (UID: \"b9100ec0-5fe8-4ad1-bce7-40ca6cf5e923\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-j76nt"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.564699 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/44a26d22-db8c-4b4d-a6c7-286ebd0197c5-metrics\") pod \"frr-k8s-mxfwh\" (UID: \"44a26d22-db8c-4b4d-a6c7-286ebd0197c5\") " pod="metallb-system/frr-k8s-mxfwh"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.564821 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/44a26d22-db8c-4b4d-a6c7-286ebd0197c5-frr-conf\") pod \"frr-k8s-mxfwh\" (UID: \"44a26d22-db8c-4b4d-a6c7-286ebd0197c5\") " pod="metallb-system/frr-k8s-mxfwh"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.564960 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/44a26d22-db8c-4b4d-a6c7-286ebd0197c5-reloader\") pod \"frr-k8s-mxfwh\" (UID: \"44a26d22-db8c-4b4d-a6c7-286ebd0197c5\") " pod="metallb-system/frr-k8s-mxfwh"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.565041 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/44a26d22-db8c-4b4d-a6c7-286ebd0197c5-frr-sockets\") pod \"frr-k8s-mxfwh\" (UID: \"44a26d22-db8c-4b4d-a6c7-286ebd0197c5\") " pod="metallb-system/frr-k8s-mxfwh"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.628586 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-r4rsx"]
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.629797 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-r4rsx"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.632455 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.632667 4637 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-bk5l4"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.635407 4637 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.645233 4637 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.654568 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-nwkx8"]
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.655841 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-nwkx8"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.658033 4637 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.666427 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm5j9\" (UniqueName: \"kubernetes.io/projected/43335bc5-11b9-4763-bf18-efeaef24d35a-kube-api-access-rm5j9\") pod \"speaker-r4rsx\" (UID: \"43335bc5-11b9-4763-bf18-efeaef24d35a\") " pod="metallb-system/speaker-r4rsx"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.666474 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wn7v\" (UniqueName: \"kubernetes.io/projected/44a26d22-db8c-4b4d-a6c7-286ebd0197c5-kube-api-access-4wn7v\") pod \"frr-k8s-mxfwh\" (UID: \"44a26d22-db8c-4b4d-a6c7-286ebd0197c5\") " pod="metallb-system/frr-k8s-mxfwh"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.666509 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44a26d22-db8c-4b4d-a6c7-286ebd0197c5-metrics-certs\") pod \"frr-k8s-mxfwh\" (UID: \"44a26d22-db8c-4b4d-a6c7-286ebd0197c5\") " pod="metallb-system/frr-k8s-mxfwh"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.666529 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/44a26d22-db8c-4b4d-a6c7-286ebd0197c5-frr-startup\") pod \"frr-k8s-mxfwh\" (UID: \"44a26d22-db8c-4b4d-a6c7-286ebd0197c5\") " pod="metallb-system/frr-k8s-mxfwh"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.666548 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d-metrics-certs\") pod \"controller-f8648f98b-nwkx8\" (UID: \"fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d\") " pod="metallb-system/controller-f8648f98b-nwkx8"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.666565 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/43335bc5-11b9-4763-bf18-efeaef24d35a-memberlist\") pod \"speaker-r4rsx\" (UID: \"43335bc5-11b9-4763-bf18-efeaef24d35a\") " pod="metallb-system/speaker-r4rsx"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.666587 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b9100ec0-5fe8-4ad1-bce7-40ca6cf5e923-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-j76nt\" (UID: \"b9100ec0-5fe8-4ad1-bce7-40ca6cf5e923\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-j76nt"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.666607 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/43335bc5-11b9-4763-bf18-efeaef24d35a-metallb-excludel2\") pod \"speaker-r4rsx\" (UID: \"43335bc5-11b9-4763-bf18-efeaef24d35a\") " pod="metallb-system/speaker-r4rsx"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.666623 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5nhn\" (UniqueName: \"kubernetes.io/projected/b9100ec0-5fe8-4ad1-bce7-40ca6cf5e923-kube-api-access-n5nhn\") pod \"frr-k8s-webhook-server-7fcb986d4-j76nt\" (UID: \"b9100ec0-5fe8-4ad1-bce7-40ca6cf5e923\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-j76nt"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.666640 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d-cert\") pod \"controller-f8648f98b-nwkx8\" (UID: \"fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d\") " pod="metallb-system/controller-f8648f98b-nwkx8"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.666659 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hldcl\" (UniqueName: \"kubernetes.io/projected/fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d-kube-api-access-hldcl\") pod \"controller-f8648f98b-nwkx8\" (UID: \"fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d\") " pod="metallb-system/controller-f8648f98b-nwkx8"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.666682 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/44a26d22-db8c-4b4d-a6c7-286ebd0197c5-metrics\") pod \"frr-k8s-mxfwh\" (UID: \"44a26d22-db8c-4b4d-a6c7-286ebd0197c5\") " pod="metallb-system/frr-k8s-mxfwh"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.666702 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/44a26d22-db8c-4b4d-a6c7-286ebd0197c5-frr-conf\") pod \"frr-k8s-mxfwh\" (UID: \"44a26d22-db8c-4b4d-a6c7-286ebd0197c5\") " pod="metallb-system/frr-k8s-mxfwh"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.666725 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/44a26d22-db8c-4b4d-a6c7-286ebd0197c5-reloader\") pod \"frr-k8s-mxfwh\" (UID: \"44a26d22-db8c-4b4d-a6c7-286ebd0197c5\") " pod="metallb-system/frr-k8s-mxfwh"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.666751 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/44a26d22-db8c-4b4d-a6c7-286ebd0197c5-frr-sockets\") pod \"frr-k8s-mxfwh\" (UID: \"44a26d22-db8c-4b4d-a6c7-286ebd0197c5\") " pod="metallb-system/frr-k8s-mxfwh"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.666775 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43335bc5-11b9-4763-bf18-efeaef24d35a-metrics-certs\") pod \"speaker-r4rsx\" (UID: \"43335bc5-11b9-4763-bf18-efeaef24d35a\") " pod="metallb-system/speaker-r4rsx"
Dec 01 14:58:34 crc kubenswrapper[4637]: E1201 14:58:34.667235 4637 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found
Dec 01 14:58:34 crc kubenswrapper[4637]: E1201 14:58:34.667307 4637 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.667567 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/44a26d22-db8c-4b4d-a6c7-286ebd0197c5-frr-conf\") pod \"frr-k8s-mxfwh\" (UID: \"44a26d22-db8c-4b4d-a6c7-286ebd0197c5\") " pod="metallb-system/frr-k8s-mxfwh"
Dec 01 14:58:34 crc kubenswrapper[4637]: E1201 14:58:34.667329 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b9100ec0-5fe8-4ad1-bce7-40ca6cf5e923-cert podName:b9100ec0-5fe8-4ad1-bce7-40ca6cf5e923 nodeName:}" failed. No retries permitted until 2025-12-01 14:58:35.167308117 +0000 UTC m=+765.685016945 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b9100ec0-5fe8-4ad1-bce7-40ca6cf5e923-cert") pod "frr-k8s-webhook-server-7fcb986d4-j76nt" (UID: "b9100ec0-5fe8-4ad1-bce7-40ca6cf5e923") : secret "frr-k8s-webhook-server-cert" not found
Dec 01 14:58:34 crc kubenswrapper[4637]: E1201 14:58:34.667636 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44a26d22-db8c-4b4d-a6c7-286ebd0197c5-metrics-certs podName:44a26d22-db8c-4b4d-a6c7-286ebd0197c5 nodeName:}" failed. No retries permitted until 2025-12-01 14:58:35.167617405 +0000 UTC m=+765.685326233 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44a26d22-db8c-4b4d-a6c7-286ebd0197c5-metrics-certs") pod "frr-k8s-mxfwh" (UID: "44a26d22-db8c-4b4d-a6c7-286ebd0197c5") : secret "frr-k8s-certs-secret" not found
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.667740 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/44a26d22-db8c-4b4d-a6c7-286ebd0197c5-metrics\") pod \"frr-k8s-mxfwh\" (UID: \"44a26d22-db8c-4b4d-a6c7-286ebd0197c5\") " pod="metallb-system/frr-k8s-mxfwh"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.667953 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/44a26d22-db8c-4b4d-a6c7-286ebd0197c5-frr-sockets\") pod \"frr-k8s-mxfwh\" (UID: \"44a26d22-db8c-4b4d-a6c7-286ebd0197c5\") " pod="metallb-system/frr-k8s-mxfwh"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.668313 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/44a26d22-db8c-4b4d-a6c7-286ebd0197c5-reloader\") pod \"frr-k8s-mxfwh\" (UID: \"44a26d22-db8c-4b4d-a6c7-286ebd0197c5\") " pod="metallb-system/frr-k8s-mxfwh"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.668528 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/44a26d22-db8c-4b4d-a6c7-286ebd0197c5-frr-startup\") pod \"frr-k8s-mxfwh\" (UID: \"44a26d22-db8c-4b4d-a6c7-286ebd0197c5\") " pod="metallb-system/frr-k8s-mxfwh"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.683315 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-nwkx8"]
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.693435 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5nhn\" (UniqueName: \"kubernetes.io/projected/b9100ec0-5fe8-4ad1-bce7-40ca6cf5e923-kube-api-access-n5nhn\") pod \"frr-k8s-webhook-server-7fcb986d4-j76nt\" (UID: \"b9100ec0-5fe8-4ad1-bce7-40ca6cf5e923\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-j76nt"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.697150 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wn7v\" (UniqueName: \"kubernetes.io/projected/44a26d22-db8c-4b4d-a6c7-286ebd0197c5-kube-api-access-4wn7v\") pod \"frr-k8s-mxfwh\" (UID: \"44a26d22-db8c-4b4d-a6c7-286ebd0197c5\") " pod="metallb-system/frr-k8s-mxfwh"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.768008 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm5j9\" (UniqueName: \"kubernetes.io/projected/43335bc5-11b9-4763-bf18-efeaef24d35a-kube-api-access-rm5j9\") pod \"speaker-r4rsx\" (UID: \"43335bc5-11b9-4763-bf18-efeaef24d35a\") " pod="metallb-system/speaker-r4rsx"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.768099 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d-metrics-certs\") pod \"controller-f8648f98b-nwkx8\" (UID: \"fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d\") " pod="metallb-system/controller-f8648f98b-nwkx8"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.768120 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/43335bc5-11b9-4763-bf18-efeaef24d35a-memberlist\") pod \"speaker-r4rsx\" (UID: \"43335bc5-11b9-4763-bf18-efeaef24d35a\") " pod="metallb-system/speaker-r4rsx"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.768149 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/43335bc5-11b9-4763-bf18-efeaef24d35a-metallb-excludel2\") pod \"speaker-r4rsx\" (UID: \"43335bc5-11b9-4763-bf18-efeaef24d35a\") " pod="metallb-system/speaker-r4rsx"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.768170 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d-cert\") pod \"controller-f8648f98b-nwkx8\" (UID: \"fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d\") " pod="metallb-system/controller-f8648f98b-nwkx8"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.768191 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hldcl\" (UniqueName: \"kubernetes.io/projected/fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d-kube-api-access-hldcl\") pod \"controller-f8648f98b-nwkx8\" (UID: \"fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d\") " pod="metallb-system/controller-f8648f98b-nwkx8"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.768238 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43335bc5-11b9-4763-bf18-efeaef24d35a-metrics-certs\") pod \"speaker-r4rsx\" (UID: \"43335bc5-11b9-4763-bf18-efeaef24d35a\") " pod="metallb-system/speaker-r4rsx"
Dec 01 14:58:34 crc kubenswrapper[4637]: E1201 14:58:34.768360 4637 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found
Dec 01 14:58:34 crc kubenswrapper[4637]: E1201 14:58:34.768433 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43335bc5-11b9-4763-bf18-efeaef24d35a-metrics-certs podName:43335bc5-11b9-4763-bf18-efeaef24d35a nodeName:}" failed. No retries permitted until 2025-12-01 14:58:35.26841464 +0000 UTC m=+765.786123468 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/43335bc5-11b9-4763-bf18-efeaef24d35a-metrics-certs") pod "speaker-r4rsx" (UID: "43335bc5-11b9-4763-bf18-efeaef24d35a") : secret "speaker-certs-secret" not found
Dec 01 14:58:34 crc kubenswrapper[4637]: E1201 14:58:34.768861 4637 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found
Dec 01 14:58:34 crc kubenswrapper[4637]: E1201 14:58:34.768885 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d-metrics-certs podName:fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d nodeName:}" failed. No retries permitted until 2025-12-01 14:58:35.268878112 +0000 UTC m=+765.786586940 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d-metrics-certs") pod "controller-f8648f98b-nwkx8" (UID: "fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d") : secret "controller-certs-secret" not found
Dec 01 14:58:34 crc kubenswrapper[4637]: E1201 14:58:34.768917 4637 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Dec 01 14:58:34 crc kubenswrapper[4637]: E1201 14:58:34.768957 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43335bc5-11b9-4763-bf18-efeaef24d35a-memberlist podName:43335bc5-11b9-4763-bf18-efeaef24d35a nodeName:}" failed. No retries permitted until 2025-12-01 14:58:35.268950174 +0000 UTC m=+765.786659002 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/43335bc5-11b9-4763-bf18-efeaef24d35a-memberlist") pod "speaker-r4rsx" (UID: "43335bc5-11b9-4763-bf18-efeaef24d35a") : secret "metallb-memberlist" not found
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.769667 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/43335bc5-11b9-4763-bf18-efeaef24d35a-metallb-excludel2\") pod \"speaker-r4rsx\" (UID: \"43335bc5-11b9-4763-bf18-efeaef24d35a\") " pod="metallb-system/speaker-r4rsx"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.774171 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d-cert\") pod \"controller-f8648f98b-nwkx8\" (UID: \"fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d\") " pod="metallb-system/controller-f8648f98b-nwkx8"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.793173 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hldcl\" (UniqueName: \"kubernetes.io/projected/fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d-kube-api-access-hldcl\") pod \"controller-f8648f98b-nwkx8\" (UID: \"fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d\") " pod="metallb-system/controller-f8648f98b-nwkx8"
Dec 01 14:58:34 crc kubenswrapper[4637]: I1201 14:58:34.807629 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm5j9\" (UniqueName: \"kubernetes.io/projected/43335bc5-11b9-4763-bf18-efeaef24d35a-kube-api-access-rm5j9\") pod \"speaker-r4rsx\" (UID: \"43335bc5-11b9-4763-bf18-efeaef24d35a\") " pod="metallb-system/speaker-r4rsx"
Dec 01 14:58:35 crc kubenswrapper[4637]: I1201 14:58:35.174888 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44a26d22-db8c-4b4d-a6c7-286ebd0197c5-metrics-certs\") pod \"frr-k8s-mxfwh\" (UID: \"44a26d22-db8c-4b4d-a6c7-286ebd0197c5\") " pod="metallb-system/frr-k8s-mxfwh"
Dec 01 14:58:35 crc kubenswrapper[4637]: I1201 14:58:35.174968 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b9100ec0-5fe8-4ad1-bce7-40ca6cf5e923-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-j76nt\" (UID: \"b9100ec0-5fe8-4ad1-bce7-40ca6cf5e923\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-j76nt"
Dec 01 14:58:35 crc kubenswrapper[4637]: I1201 14:58:35.178063 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b9100ec0-5fe8-4ad1-bce7-40ca6cf5e923-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-j76nt\" (UID: \"b9100ec0-5fe8-4ad1-bce7-40ca6cf5e923\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-j76nt"
Dec 01 14:58:35 crc kubenswrapper[4637]: I1201 14:58:35.180156 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44a26d22-db8c-4b4d-a6c7-286ebd0197c5-metrics-certs\") pod \"frr-k8s-mxfwh\" (UID: \"44a26d22-db8c-4b4d-a6c7-286ebd0197c5\") " pod="metallb-system/frr-k8s-mxfwh"
Dec 01 14:58:35 crc kubenswrapper[4637]: I1201 14:58:35.275796 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d-metrics-certs\") pod \"controller-f8648f98b-nwkx8\" (UID: \"fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d\") " pod="metallb-system/controller-f8648f98b-nwkx8"
Dec 01 14:58:35 crc kubenswrapper[4637]: I1201 14:58:35.275841 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/43335bc5-11b9-4763-bf18-efeaef24d35a-memberlist\") pod \"speaker-r4rsx\" (UID:
\"43335bc5-11b9-4763-bf18-efeaef24d35a\") " pod="metallb-system/speaker-r4rsx" Dec 01 14:58:35 crc kubenswrapper[4637]: I1201 14:58:35.275905 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43335bc5-11b9-4763-bf18-efeaef24d35a-metrics-certs\") pod \"speaker-r4rsx\" (UID: \"43335bc5-11b9-4763-bf18-efeaef24d35a\") " pod="metallb-system/speaker-r4rsx" Dec 01 14:58:35 crc kubenswrapper[4637]: E1201 14:58:35.276131 4637 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 01 14:58:35 crc kubenswrapper[4637]: E1201 14:58:35.276267 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43335bc5-11b9-4763-bf18-efeaef24d35a-memberlist podName:43335bc5-11b9-4763-bf18-efeaef24d35a nodeName:}" failed. No retries permitted until 2025-12-01 14:58:36.276240773 +0000 UTC m=+766.793949781 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/43335bc5-11b9-4763-bf18-efeaef24d35a-memberlist") pod "speaker-r4rsx" (UID: "43335bc5-11b9-4763-bf18-efeaef24d35a") : secret "metallb-memberlist" not found Dec 01 14:58:35 crc kubenswrapper[4637]: I1201 14:58:35.279087 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d-metrics-certs\") pod \"controller-f8648f98b-nwkx8\" (UID: \"fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d\") " pod="metallb-system/controller-f8648f98b-nwkx8" Dec 01 14:58:35 crc kubenswrapper[4637]: I1201 14:58:35.279867 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43335bc5-11b9-4763-bf18-efeaef24d35a-metrics-certs\") pod \"speaker-r4rsx\" (UID: \"43335bc5-11b9-4763-bf18-efeaef24d35a\") " pod="metallb-system/speaker-r4rsx" Dec 01 14:58:35 crc 
kubenswrapper[4637]: I1201 14:58:35.447121 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-j76nt" Dec 01 14:58:35 crc kubenswrapper[4637]: I1201 14:58:35.451786 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-mxfwh" Dec 01 14:58:35 crc kubenswrapper[4637]: I1201 14:58:35.568634 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-nwkx8" Dec 01 14:58:35 crc kubenswrapper[4637]: I1201 14:58:35.683588 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-j76nt"] Dec 01 14:58:35 crc kubenswrapper[4637]: W1201 14:58:35.705433 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9100ec0_5fe8_4ad1_bce7_40ca6cf5e923.slice/crio-49849d94f62142e5e89d087312e05de41434337919417ace2b8867f79b81f5ba WatchSource:0}: Error finding container 49849d94f62142e5e89d087312e05de41434337919417ace2b8867f79b81f5ba: Status 404 returned error can't find the container with id 49849d94f62142e5e89d087312e05de41434337919417ace2b8867f79b81f5ba Dec 01 14:58:36 crc kubenswrapper[4637]: W1201 14:58:36.020203 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa9b7dc0_f1ec_41b0_91b9_11bef4d2e56d.slice/crio-ea6c56109b7e5b391aa1f23850054ed4f245035aed25cbcb4b816f6586be94d7 WatchSource:0}: Error finding container ea6c56109b7e5b391aa1f23850054ed4f245035aed25cbcb4b816f6586be94d7: Status 404 returned error can't find the container with id ea6c56109b7e5b391aa1f23850054ed4f245035aed25cbcb4b816f6586be94d7 Dec 01 14:58:36 crc kubenswrapper[4637]: I1201 14:58:36.023382 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-nwkx8"] Dec 01 14:58:36 crc 
kubenswrapper[4637]: I1201 14:58:36.293327 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/43335bc5-11b9-4763-bf18-efeaef24d35a-memberlist\") pod \"speaker-r4rsx\" (UID: \"43335bc5-11b9-4763-bf18-efeaef24d35a\") " pod="metallb-system/speaker-r4rsx" Dec 01 14:58:36 crc kubenswrapper[4637]: E1201 14:58:36.293582 4637 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 01 14:58:36 crc kubenswrapper[4637]: E1201 14:58:36.293660 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43335bc5-11b9-4763-bf18-efeaef24d35a-memberlist podName:43335bc5-11b9-4763-bf18-efeaef24d35a nodeName:}" failed. No retries permitted until 2025-12-01 14:58:38.293632727 +0000 UTC m=+768.811341565 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/43335bc5-11b9-4763-bf18-efeaef24d35a-memberlist") pod "speaker-r4rsx" (UID: "43335bc5-11b9-4763-bf18-efeaef24d35a") : secret "metallb-memberlist" not found Dec 01 14:58:36 crc kubenswrapper[4637]: I1201 14:58:36.409368 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-nwkx8" event={"ID":"fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d","Type":"ContainerStarted","Data":"6f620173c970529b303e7f9fbf8561434391775e1ba4d2330fd46347cccc572f"} Dec 01 14:58:36 crc kubenswrapper[4637]: I1201 14:58:36.409422 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-nwkx8" event={"ID":"fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d","Type":"ContainerStarted","Data":"f60fa8f71b67b455334212564e898621693a0d5a54a0c467ef11251d03ec61e7"} Dec 01 14:58:36 crc kubenswrapper[4637]: I1201 14:58:36.409434 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-nwkx8" 
event={"ID":"fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d","Type":"ContainerStarted","Data":"ea6c56109b7e5b391aa1f23850054ed4f245035aed25cbcb4b816f6586be94d7"} Dec 01 14:58:36 crc kubenswrapper[4637]: I1201 14:58:36.410684 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-nwkx8" Dec 01 14:58:36 crc kubenswrapper[4637]: I1201 14:58:36.412002 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-j76nt" event={"ID":"b9100ec0-5fe8-4ad1-bce7-40ca6cf5e923","Type":"ContainerStarted","Data":"49849d94f62142e5e89d087312e05de41434337919417ace2b8867f79b81f5ba"} Dec 01 14:58:36 crc kubenswrapper[4637]: I1201 14:58:36.413375 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxfwh" event={"ID":"44a26d22-db8c-4b4d-a6c7-286ebd0197c5","Type":"ContainerStarted","Data":"537116e2694d9833960508a4ac8e3843a250bc35bb9ce0faa61344b3fb04b571"} Dec 01 14:58:38 crc kubenswrapper[4637]: I1201 14:58:38.323682 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/43335bc5-11b9-4763-bf18-efeaef24d35a-memberlist\") pod \"speaker-r4rsx\" (UID: \"43335bc5-11b9-4763-bf18-efeaef24d35a\") " pod="metallb-system/speaker-r4rsx" Dec 01 14:58:38 crc kubenswrapper[4637]: I1201 14:58:38.331219 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/43335bc5-11b9-4763-bf18-efeaef24d35a-memberlist\") pod \"speaker-r4rsx\" (UID: \"43335bc5-11b9-4763-bf18-efeaef24d35a\") " pod="metallb-system/speaker-r4rsx" Dec 01 14:58:38 crc kubenswrapper[4637]: I1201 14:58:38.545055 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-r4rsx" Dec 01 14:58:38 crc kubenswrapper[4637]: W1201 14:58:38.575469 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43335bc5_11b9_4763_bf18_efeaef24d35a.slice/crio-23c574d2f39042be1750a8745bda935a8db334fa86d9298fe1f12f7f8964d96a WatchSource:0}: Error finding container 23c574d2f39042be1750a8745bda935a8db334fa86d9298fe1f12f7f8964d96a: Status 404 returned error can't find the container with id 23c574d2f39042be1750a8745bda935a8db334fa86d9298fe1f12f7f8964d96a Dec 01 14:58:39 crc kubenswrapper[4637]: I1201 14:58:39.440338 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-r4rsx" event={"ID":"43335bc5-11b9-4763-bf18-efeaef24d35a","Type":"ContainerStarted","Data":"8364add26eb009e16ccc17e671687431a93f5241db2e07785dbcb685a5614597"} Dec 01 14:58:39 crc kubenswrapper[4637]: I1201 14:58:39.440625 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-r4rsx" event={"ID":"43335bc5-11b9-4763-bf18-efeaef24d35a","Type":"ContainerStarted","Data":"08d24028cb2e80f843e00e7f534b8a455d4da88f7b75ae66a05783f8f1f41a13"} Dec 01 14:58:39 crc kubenswrapper[4637]: I1201 14:58:39.440641 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-r4rsx" event={"ID":"43335bc5-11b9-4763-bf18-efeaef24d35a","Type":"ContainerStarted","Data":"23c574d2f39042be1750a8745bda935a8db334fa86d9298fe1f12f7f8964d96a"} Dec 01 14:58:39 crc kubenswrapper[4637]: I1201 14:58:39.440847 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-r4rsx" Dec 01 14:58:39 crc kubenswrapper[4637]: I1201 14:58:39.464769 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-nwkx8" podStartSLOduration=5.464741849 podStartE2EDuration="5.464741849s" podCreationTimestamp="2025-12-01 14:58:34 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:58:36.436818798 +0000 UTC m=+766.954527626" watchObservedRunningTime="2025-12-01 14:58:39.464741849 +0000 UTC m=+769.982450677" Dec 01 14:58:39 crc kubenswrapper[4637]: I1201 14:58:39.467837 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-r4rsx" podStartSLOduration=5.467817851 podStartE2EDuration="5.467817851s" podCreationTimestamp="2025-12-01 14:58:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 14:58:39.465029566 +0000 UTC m=+769.982738394" watchObservedRunningTime="2025-12-01 14:58:39.467817851 +0000 UTC m=+769.985526679" Dec 01 14:58:44 crc kubenswrapper[4637]: I1201 14:58:44.483048 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-j76nt" event={"ID":"b9100ec0-5fe8-4ad1-bce7-40ca6cf5e923","Type":"ContainerStarted","Data":"ee4679cd2f439afe25be8a9ca8b0f7f1fc241db21dd098ad14d6646913dcb0fe"} Dec 01 14:58:44 crc kubenswrapper[4637]: I1201 14:58:44.485268 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-j76nt" Dec 01 14:58:44 crc kubenswrapper[4637]: I1201 14:58:44.487898 4637 generic.go:334] "Generic (PLEG): container finished" podID="44a26d22-db8c-4b4d-a6c7-286ebd0197c5" containerID="a4e15a829768ae206c51b55a9c26d058eb99814a8638223dd020893f898e1eec" exitCode=0 Dec 01 14:58:44 crc kubenswrapper[4637]: I1201 14:58:44.487962 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxfwh" event={"ID":"44a26d22-db8c-4b4d-a6c7-286ebd0197c5","Type":"ContainerDied","Data":"a4e15a829768ae206c51b55a9c26d058eb99814a8638223dd020893f898e1eec"} Dec 01 14:58:44 crc kubenswrapper[4637]: I1201 14:58:44.549485 4637 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-j76nt" podStartSLOduration=2.716299597 podStartE2EDuration="10.549459328s" podCreationTimestamp="2025-12-01 14:58:34 +0000 UTC" firstStartedPulling="2025-12-01 14:58:35.707606556 +0000 UTC m=+766.225315384" lastFinishedPulling="2025-12-01 14:58:43.540766287 +0000 UTC m=+774.058475115" observedRunningTime="2025-12-01 14:58:44.520039409 +0000 UTC m=+775.037748237" watchObservedRunningTime="2025-12-01 14:58:44.549459328 +0000 UTC m=+775.067168156" Dec 01 14:58:45 crc kubenswrapper[4637]: I1201 14:58:45.496698 4637 generic.go:334] "Generic (PLEG): container finished" podID="44a26d22-db8c-4b4d-a6c7-286ebd0197c5" containerID="594ea2e05e0620f1f3d0d7a81bc75db4111e135e7c3d0d95121f02b1c2417b05" exitCode=0 Dec 01 14:58:45 crc kubenswrapper[4637]: I1201 14:58:45.496790 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxfwh" event={"ID":"44a26d22-db8c-4b4d-a6c7-286ebd0197c5","Type":"ContainerDied","Data":"594ea2e05e0620f1f3d0d7a81bc75db4111e135e7c3d0d95121f02b1c2417b05"} Dec 01 14:58:45 crc kubenswrapper[4637]: I1201 14:58:45.613734 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 14:58:45 crc kubenswrapper[4637]: I1201 14:58:45.614217 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 14:58:45 crc kubenswrapper[4637]: I1201 14:58:45.614315 4637 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 14:58:45 crc kubenswrapper[4637]: I1201 14:58:45.615011 4637 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f441d58a7fd53036d54f051f6c8a3463949b9941e99c7ef5c07b779f2546fa99"} pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 14:58:45 crc kubenswrapper[4637]: I1201 14:58:45.615138 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" containerID="cri-o://f441d58a7fd53036d54f051f6c8a3463949b9941e99c7ef5c07b779f2546fa99" gracePeriod=600 Dec 01 14:58:46 crc kubenswrapper[4637]: I1201 14:58:46.512102 4637 generic.go:334] "Generic (PLEG): container finished" podID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerID="f441d58a7fd53036d54f051f6c8a3463949b9941e99c7ef5c07b779f2546fa99" exitCode=0 Dec 01 14:58:46 crc kubenswrapper[4637]: I1201 14:58:46.512195 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerDied","Data":"f441d58a7fd53036d54f051f6c8a3463949b9941e99c7ef5c07b779f2546fa99"} Dec 01 14:58:46 crc kubenswrapper[4637]: I1201 14:58:46.512550 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerStarted","Data":"e979781fddd064342f7468d039fd5c3c7d452779d2cd7d5b9f3797e85de0bed3"} Dec 01 14:58:46 crc kubenswrapper[4637]: I1201 14:58:46.512620 4637 scope.go:117] "RemoveContainer" 
containerID="e755a6e309afd78aa4596b9c41707c84ca8de306749cc711e0f957364ca44ea0" Dec 01 14:58:46 crc kubenswrapper[4637]: I1201 14:58:46.515359 4637 generic.go:334] "Generic (PLEG): container finished" podID="44a26d22-db8c-4b4d-a6c7-286ebd0197c5" containerID="0db86225422fea82793f93c2adb9684305cce4e17ae0bd471dcff92de74c0b83" exitCode=0 Dec 01 14:58:46 crc kubenswrapper[4637]: I1201 14:58:46.515787 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxfwh" event={"ID":"44a26d22-db8c-4b4d-a6c7-286ebd0197c5","Type":"ContainerDied","Data":"0db86225422fea82793f93c2adb9684305cce4e17ae0bd471dcff92de74c0b83"} Dec 01 14:58:47 crc kubenswrapper[4637]: I1201 14:58:47.540654 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxfwh" event={"ID":"44a26d22-db8c-4b4d-a6c7-286ebd0197c5","Type":"ContainerStarted","Data":"b0142392e40518148550513a82285da15e23104553f773d99375a9dc9b12f949"} Dec 01 14:58:47 crc kubenswrapper[4637]: I1201 14:58:47.541890 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxfwh" event={"ID":"44a26d22-db8c-4b4d-a6c7-286ebd0197c5","Type":"ContainerStarted","Data":"303fbc64b0b7b9c66657e27d05125d3c42e95b18e6b7c180f2bc91468c83301c"} Dec 01 14:58:47 crc kubenswrapper[4637]: I1201 14:58:47.541998 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxfwh" event={"ID":"44a26d22-db8c-4b4d-a6c7-286ebd0197c5","Type":"ContainerStarted","Data":"ec4db113c7c2f939ea64cf859246bfa379229510c969012d66c809124fc9368c"} Dec 01 14:58:47 crc kubenswrapper[4637]: I1201 14:58:47.542061 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxfwh" event={"ID":"44a26d22-db8c-4b4d-a6c7-286ebd0197c5","Type":"ContainerStarted","Data":"3e67757f58f4600160c489fbe4adaa50cd4dd02797d236258f4303df7531099d"} Dec 01 14:58:47 crc kubenswrapper[4637]: I1201 14:58:47.542143 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-mxfwh" event={"ID":"44a26d22-db8c-4b4d-a6c7-286ebd0197c5","Type":"ContainerStarted","Data":"90d07adeac60af44b0a9d487cb8e022e375e45caba19f2f8e4532f088a3ba3b8"} Dec 01 14:58:48 crc kubenswrapper[4637]: I1201 14:58:48.549879 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-r4rsx" Dec 01 14:58:48 crc kubenswrapper[4637]: I1201 14:58:48.558438 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxfwh" event={"ID":"44a26d22-db8c-4b4d-a6c7-286ebd0197c5","Type":"ContainerStarted","Data":"d97c988d338d5de0fbafd6327f7b645f11f1f60d826f513267c47a0510d7107f"} Dec 01 14:58:48 crc kubenswrapper[4637]: I1201 14:58:48.559278 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-mxfwh" Dec 01 14:58:48 crc kubenswrapper[4637]: I1201 14:58:48.631964 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-mxfwh" podStartSLOduration=6.723639101 podStartE2EDuration="14.631924078s" podCreationTimestamp="2025-12-01 14:58:34 +0000 UTC" firstStartedPulling="2025-12-01 14:58:35.610452099 +0000 UTC m=+766.128160927" lastFinishedPulling="2025-12-01 14:58:43.518737076 +0000 UTC m=+774.036445904" observedRunningTime="2025-12-01 14:58:48.610902805 +0000 UTC m=+779.128611633" watchObservedRunningTime="2025-12-01 14:58:48.631924078 +0000 UTC m=+779.149632906" Dec 01 14:58:50 crc kubenswrapper[4637]: I1201 14:58:50.452277 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-mxfwh" Dec 01 14:58:50 crc kubenswrapper[4637]: I1201 14:58:50.500731 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-mxfwh" Dec 01 14:58:51 crc kubenswrapper[4637]: I1201 14:58:51.369700 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-mxps2"] Dec 01 14:58:51 crc 
kubenswrapper[4637]: I1201 14:58:51.371859 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mxps2" Dec 01 14:58:51 crc kubenswrapper[4637]: I1201 14:58:51.374597 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 01 14:58:51 crc kubenswrapper[4637]: I1201 14:58:51.375007 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 01 14:58:51 crc kubenswrapper[4637]: I1201 14:58:51.376014 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-qqzsd" Dec 01 14:58:51 crc kubenswrapper[4637]: I1201 14:58:51.397316 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mxps2"] Dec 01 14:58:51 crc kubenswrapper[4637]: I1201 14:58:51.468021 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfmxl\" (UniqueName: \"kubernetes.io/projected/f84d0f19-20ff-470c-af40-92912a5e0516-kube-api-access-bfmxl\") pod \"openstack-operator-index-mxps2\" (UID: \"f84d0f19-20ff-470c-af40-92912a5e0516\") " pod="openstack-operators/openstack-operator-index-mxps2" Dec 01 14:58:51 crc kubenswrapper[4637]: I1201 14:58:51.569542 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfmxl\" (UniqueName: \"kubernetes.io/projected/f84d0f19-20ff-470c-af40-92912a5e0516-kube-api-access-bfmxl\") pod \"openstack-operator-index-mxps2\" (UID: \"f84d0f19-20ff-470c-af40-92912a5e0516\") " pod="openstack-operators/openstack-operator-index-mxps2" Dec 01 14:58:51 crc kubenswrapper[4637]: I1201 14:58:51.589781 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfmxl\" (UniqueName: 
\"kubernetes.io/projected/f84d0f19-20ff-470c-af40-92912a5e0516-kube-api-access-bfmxl\") pod \"openstack-operator-index-mxps2\" (UID: \"f84d0f19-20ff-470c-af40-92912a5e0516\") " pod="openstack-operators/openstack-operator-index-mxps2" Dec 01 14:58:51 crc kubenswrapper[4637]: I1201 14:58:51.694218 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mxps2" Dec 01 14:58:51 crc kubenswrapper[4637]: I1201 14:58:51.904715 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mxps2"] Dec 01 14:58:52 crc kubenswrapper[4637]: I1201 14:58:52.582148 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mxps2" event={"ID":"f84d0f19-20ff-470c-af40-92912a5e0516","Type":"ContainerStarted","Data":"9f40e6ae7034507b6ccd2e1dab75e19d6b7e1c3fb65d85f47f0e7285afd3f4ee"} Dec 01 14:58:54 crc kubenswrapper[4637]: I1201 14:58:54.599903 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mxps2" event={"ID":"f84d0f19-20ff-470c-af40-92912a5e0516","Type":"ContainerStarted","Data":"bd2216f0a650f2ae7ea7046440817088a9d86d40778ba390b25e6e9c820ddf91"} Dec 01 14:58:54 crc kubenswrapper[4637]: I1201 14:58:54.622974 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-mxps2" podStartSLOduration=1.3063671989999999 podStartE2EDuration="3.622926435s" podCreationTimestamp="2025-12-01 14:58:51 +0000 UTC" firstStartedPulling="2025-12-01 14:58:51.913826892 +0000 UTC m=+782.431535720" lastFinishedPulling="2025-12-01 14:58:54.230386128 +0000 UTC m=+784.748094956" observedRunningTime="2025-12-01 14:58:54.621462255 +0000 UTC m=+785.139171093" watchObservedRunningTime="2025-12-01 14:58:54.622926435 +0000 UTC m=+785.140635263" Dec 01 14:58:54 crc kubenswrapper[4637]: I1201 14:58:54.741294 4637 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack-operators/openstack-operator-index-mxps2"] Dec 01 14:58:55 crc kubenswrapper[4637]: I1201 14:58:55.348497 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-6dtxv"] Dec 01 14:58:55 crc kubenswrapper[4637]: I1201 14:58:55.351054 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6dtxv" Dec 01 14:58:55 crc kubenswrapper[4637]: I1201 14:58:55.389622 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6dtxv"] Dec 01 14:58:55 crc kubenswrapper[4637]: I1201 14:58:55.430074 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm8kb\" (UniqueName: \"kubernetes.io/projected/fa9cd1f8-d36a-4263-b944-8594a42fe15f-kube-api-access-nm8kb\") pod \"openstack-operator-index-6dtxv\" (UID: \"fa9cd1f8-d36a-4263-b944-8594a42fe15f\") " pod="openstack-operators/openstack-operator-index-6dtxv" Dec 01 14:58:55 crc kubenswrapper[4637]: I1201 14:58:55.454041 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-j76nt" Dec 01 14:58:55 crc kubenswrapper[4637]: I1201 14:58:55.533019 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm8kb\" (UniqueName: \"kubernetes.io/projected/fa9cd1f8-d36a-4263-b944-8594a42fe15f-kube-api-access-nm8kb\") pod \"openstack-operator-index-6dtxv\" (UID: \"fa9cd1f8-d36a-4263-b944-8594a42fe15f\") " pod="openstack-operators/openstack-operator-index-6dtxv" Dec 01 14:58:55 crc kubenswrapper[4637]: I1201 14:58:55.556534 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm8kb\" (UniqueName: \"kubernetes.io/projected/fa9cd1f8-d36a-4263-b944-8594a42fe15f-kube-api-access-nm8kb\") pod \"openstack-operator-index-6dtxv\" (UID: 
\"fa9cd1f8-d36a-4263-b944-8594a42fe15f\") " pod="openstack-operators/openstack-operator-index-6dtxv" Dec 01 14:58:55 crc kubenswrapper[4637]: I1201 14:58:55.574604 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-nwkx8" Dec 01 14:58:55 crc kubenswrapper[4637]: I1201 14:58:55.698237 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6dtxv" Dec 01 14:58:56 crc kubenswrapper[4637]: I1201 14:58:56.181840 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6dtxv"] Dec 01 14:58:56 crc kubenswrapper[4637]: I1201 14:58:56.616698 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6dtxv" event={"ID":"fa9cd1f8-d36a-4263-b944-8594a42fe15f","Type":"ContainerStarted","Data":"0dcc8c0b7a39be4432f586db3cac672deec0a02c8fd5ec6e060ff9b25a8fb204"} Dec 01 14:58:56 crc kubenswrapper[4637]: I1201 14:58:56.617718 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6dtxv" event={"ID":"fa9cd1f8-d36a-4263-b944-8594a42fe15f","Type":"ContainerStarted","Data":"1fd08416877afb9dd81943c09d1d66d8a797e061313779b29273af28226b951c"} Dec 01 14:58:56 crc kubenswrapper[4637]: I1201 14:58:56.616775 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-mxps2" podUID="f84d0f19-20ff-470c-af40-92912a5e0516" containerName="registry-server" containerID="cri-o://bd2216f0a650f2ae7ea7046440817088a9d86d40778ba390b25e6e9c820ddf91" gracePeriod=2 Dec 01 14:58:56 crc kubenswrapper[4637]: I1201 14:58:56.644705 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6dtxv" podStartSLOduration=1.581231439 podStartE2EDuration="1.64468408s" podCreationTimestamp="2025-12-01 14:58:55 +0000 UTC" 
firstStartedPulling="2025-12-01 14:58:56.197317666 +0000 UTC m=+786.715026494" lastFinishedPulling="2025-12-01 14:58:56.260770287 +0000 UTC m=+786.778479135" observedRunningTime="2025-12-01 14:58:56.644336271 +0000 UTC m=+787.162045099" watchObservedRunningTime="2025-12-01 14:58:56.64468408 +0000 UTC m=+787.162392908" Dec 01 14:58:56 crc kubenswrapper[4637]: I1201 14:58:56.972463 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mxps2" Dec 01 14:58:57 crc kubenswrapper[4637]: I1201 14:58:57.066172 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfmxl\" (UniqueName: \"kubernetes.io/projected/f84d0f19-20ff-470c-af40-92912a5e0516-kube-api-access-bfmxl\") pod \"f84d0f19-20ff-470c-af40-92912a5e0516\" (UID: \"f84d0f19-20ff-470c-af40-92912a5e0516\") " Dec 01 14:58:57 crc kubenswrapper[4637]: I1201 14:58:57.076601 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f84d0f19-20ff-470c-af40-92912a5e0516-kube-api-access-bfmxl" (OuterVolumeSpecName: "kube-api-access-bfmxl") pod "f84d0f19-20ff-470c-af40-92912a5e0516" (UID: "f84d0f19-20ff-470c-af40-92912a5e0516"). InnerVolumeSpecName "kube-api-access-bfmxl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:58:57 crc kubenswrapper[4637]: I1201 14:58:57.168833 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfmxl\" (UniqueName: \"kubernetes.io/projected/f84d0f19-20ff-470c-af40-92912a5e0516-kube-api-access-bfmxl\") on node \"crc\" DevicePath \"\"" Dec 01 14:58:57 crc kubenswrapper[4637]: I1201 14:58:57.626826 4637 generic.go:334] "Generic (PLEG): container finished" podID="f84d0f19-20ff-470c-af40-92912a5e0516" containerID="bd2216f0a650f2ae7ea7046440817088a9d86d40778ba390b25e6e9c820ddf91" exitCode=0 Dec 01 14:58:57 crc kubenswrapper[4637]: I1201 14:58:57.626954 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mxps2" Dec 01 14:58:57 crc kubenswrapper[4637]: I1201 14:58:57.626997 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mxps2" event={"ID":"f84d0f19-20ff-470c-af40-92912a5e0516","Type":"ContainerDied","Data":"bd2216f0a650f2ae7ea7046440817088a9d86d40778ba390b25e6e9c820ddf91"} Dec 01 14:58:57 crc kubenswrapper[4637]: I1201 14:58:57.627085 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mxps2" event={"ID":"f84d0f19-20ff-470c-af40-92912a5e0516","Type":"ContainerDied","Data":"9f40e6ae7034507b6ccd2e1dab75e19d6b7e1c3fb65d85f47f0e7285afd3f4ee"} Dec 01 14:58:57 crc kubenswrapper[4637]: I1201 14:58:57.627123 4637 scope.go:117] "RemoveContainer" containerID="bd2216f0a650f2ae7ea7046440817088a9d86d40778ba390b25e6e9c820ddf91" Dec 01 14:58:57 crc kubenswrapper[4637]: I1201 14:58:57.652698 4637 scope.go:117] "RemoveContainer" containerID="bd2216f0a650f2ae7ea7046440817088a9d86d40778ba390b25e6e9c820ddf91" Dec 01 14:58:57 crc kubenswrapper[4637]: E1201 14:58:57.656178 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"bd2216f0a650f2ae7ea7046440817088a9d86d40778ba390b25e6e9c820ddf91\": container with ID starting with bd2216f0a650f2ae7ea7046440817088a9d86d40778ba390b25e6e9c820ddf91 not found: ID does not exist" containerID="bd2216f0a650f2ae7ea7046440817088a9d86d40778ba390b25e6e9c820ddf91" Dec 01 14:58:57 crc kubenswrapper[4637]: I1201 14:58:57.656224 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd2216f0a650f2ae7ea7046440817088a9d86d40778ba390b25e6e9c820ddf91"} err="failed to get container status \"bd2216f0a650f2ae7ea7046440817088a9d86d40778ba390b25e6e9c820ddf91\": rpc error: code = NotFound desc = could not find container \"bd2216f0a650f2ae7ea7046440817088a9d86d40778ba390b25e6e9c820ddf91\": container with ID starting with bd2216f0a650f2ae7ea7046440817088a9d86d40778ba390b25e6e9c820ddf91 not found: ID does not exist" Dec 01 14:58:57 crc kubenswrapper[4637]: I1201 14:58:57.665962 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-mxps2"] Dec 01 14:58:57 crc kubenswrapper[4637]: I1201 14:58:57.672337 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-mxps2"] Dec 01 14:58:57 crc kubenswrapper[4637]: I1201 14:58:57.782321 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f84d0f19-20ff-470c-af40-92912a5e0516" path="/var/lib/kubelet/pods/f84d0f19-20ff-470c-af40-92912a5e0516/volumes" Dec 01 14:59:05 crc kubenswrapper[4637]: I1201 14:59:05.464016 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-mxfwh" Dec 01 14:59:05 crc kubenswrapper[4637]: I1201 14:59:05.699306 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-6dtxv" Dec 01 14:59:05 crc kubenswrapper[4637]: I1201 14:59:05.699370 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-index-6dtxv" Dec 01 14:59:05 crc kubenswrapper[4637]: I1201 14:59:05.736563 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-6dtxv" Dec 01 14:59:06 crc kubenswrapper[4637]: I1201 14:59:06.722238 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-6dtxv" Dec 01 14:59:13 crc kubenswrapper[4637]: I1201 14:59:13.574748 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp"] Dec 01 14:59:13 crc kubenswrapper[4637]: E1201 14:59:13.576185 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f84d0f19-20ff-470c-af40-92912a5e0516" containerName="registry-server" Dec 01 14:59:13 crc kubenswrapper[4637]: I1201 14:59:13.576216 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="f84d0f19-20ff-470c-af40-92912a5e0516" containerName="registry-server" Dec 01 14:59:13 crc kubenswrapper[4637]: I1201 14:59:13.576446 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="f84d0f19-20ff-470c-af40-92912a5e0516" containerName="registry-server" Dec 01 14:59:13 crc kubenswrapper[4637]: I1201 14:59:13.578165 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp" Dec 01 14:59:13 crc kubenswrapper[4637]: I1201 14:59:13.586014 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkqqq\" (UniqueName: \"kubernetes.io/projected/ae244c42-651d-4f31-9639-ba005da6ccc9-kube-api-access-qkqqq\") pod \"1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp\" (UID: \"ae244c42-651d-4f31-9639-ba005da6ccc9\") " pod="openstack-operators/1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp" Dec 01 14:59:13 crc kubenswrapper[4637]: I1201 14:59:13.586224 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ae244c42-651d-4f31-9639-ba005da6ccc9-bundle\") pod \"1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp\" (UID: \"ae244c42-651d-4f31-9639-ba005da6ccc9\") " pod="openstack-operators/1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp" Dec 01 14:59:13 crc kubenswrapper[4637]: I1201 14:59:13.586337 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ae244c42-651d-4f31-9639-ba005da6ccc9-util\") pod \"1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp\" (UID: \"ae244c42-651d-4f31-9639-ba005da6ccc9\") " pod="openstack-operators/1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp" Dec 01 14:59:13 crc kubenswrapper[4637]: I1201 14:59:13.598080 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-fv7l5" Dec 01 14:59:13 crc kubenswrapper[4637]: I1201 14:59:13.612835 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp"] Dec 01 14:59:13 crc kubenswrapper[4637]: I1201 
14:59:13.686748 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ae244c42-651d-4f31-9639-ba005da6ccc9-bundle\") pod \"1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp\" (UID: \"ae244c42-651d-4f31-9639-ba005da6ccc9\") " pod="openstack-operators/1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp" Dec 01 14:59:13 crc kubenswrapper[4637]: I1201 14:59:13.686820 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ae244c42-651d-4f31-9639-ba005da6ccc9-util\") pod \"1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp\" (UID: \"ae244c42-651d-4f31-9639-ba005da6ccc9\") " pod="openstack-operators/1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp" Dec 01 14:59:13 crc kubenswrapper[4637]: I1201 14:59:13.686868 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkqqq\" (UniqueName: \"kubernetes.io/projected/ae244c42-651d-4f31-9639-ba005da6ccc9-kube-api-access-qkqqq\") pod \"1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp\" (UID: \"ae244c42-651d-4f31-9639-ba005da6ccc9\") " pod="openstack-operators/1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp" Dec 01 14:59:13 crc kubenswrapper[4637]: I1201 14:59:13.687522 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ae244c42-651d-4f31-9639-ba005da6ccc9-bundle\") pod \"1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp\" (UID: \"ae244c42-651d-4f31-9639-ba005da6ccc9\") " pod="openstack-operators/1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp" Dec 01 14:59:13 crc kubenswrapper[4637]: I1201 14:59:13.690081 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/ae244c42-651d-4f31-9639-ba005da6ccc9-util\") pod \"1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp\" (UID: \"ae244c42-651d-4f31-9639-ba005da6ccc9\") " pod="openstack-operators/1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp" Dec 01 14:59:13 crc kubenswrapper[4637]: I1201 14:59:13.712196 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkqqq\" (UniqueName: \"kubernetes.io/projected/ae244c42-651d-4f31-9639-ba005da6ccc9-kube-api-access-qkqqq\") pod \"1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp\" (UID: \"ae244c42-651d-4f31-9639-ba005da6ccc9\") " pod="openstack-operators/1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp" Dec 01 14:59:13 crc kubenswrapper[4637]: I1201 14:59:13.915894 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp" Dec 01 14:59:14 crc kubenswrapper[4637]: I1201 14:59:14.105036 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp"] Dec 01 14:59:14 crc kubenswrapper[4637]: W1201 14:59:14.109600 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae244c42_651d_4f31_9639_ba005da6ccc9.slice/crio-7374bce1b1eb51514601a8a8db7388848ce73ac926924be6b210a1280346c529 WatchSource:0}: Error finding container 7374bce1b1eb51514601a8a8db7388848ce73ac926924be6b210a1280346c529: Status 404 returned error can't find the container with id 7374bce1b1eb51514601a8a8db7388848ce73ac926924be6b210a1280346c529 Dec 01 14:59:14 crc kubenswrapper[4637]: I1201 14:59:14.754819 4637 generic.go:334] "Generic (PLEG): container finished" podID="ae244c42-651d-4f31-9639-ba005da6ccc9" containerID="84aa25c85dbc257412c746e1304eb3de86a44ab42ad744fd60fb57a2d564ce59" exitCode=0 Dec 01 
14:59:14 crc kubenswrapper[4637]: I1201 14:59:14.754913 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp" event={"ID":"ae244c42-651d-4f31-9639-ba005da6ccc9","Type":"ContainerDied","Data":"84aa25c85dbc257412c746e1304eb3de86a44ab42ad744fd60fb57a2d564ce59"} Dec 01 14:59:14 crc kubenswrapper[4637]: I1201 14:59:14.754973 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp" event={"ID":"ae244c42-651d-4f31-9639-ba005da6ccc9","Type":"ContainerStarted","Data":"7374bce1b1eb51514601a8a8db7388848ce73ac926924be6b210a1280346c529"} Dec 01 14:59:15 crc kubenswrapper[4637]: I1201 14:59:15.764244 4637 generic.go:334] "Generic (PLEG): container finished" podID="ae244c42-651d-4f31-9639-ba005da6ccc9" containerID="b67bd838d46a352932fa6390b72d15e808f6a38328ab29e543e17817b18378c9" exitCode=0 Dec 01 14:59:15 crc kubenswrapper[4637]: I1201 14:59:15.764526 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp" event={"ID":"ae244c42-651d-4f31-9639-ba005da6ccc9","Type":"ContainerDied","Data":"b67bd838d46a352932fa6390b72d15e808f6a38328ab29e543e17817b18378c9"} Dec 01 14:59:16 crc kubenswrapper[4637]: I1201 14:59:16.775851 4637 generic.go:334] "Generic (PLEG): container finished" podID="ae244c42-651d-4f31-9639-ba005da6ccc9" containerID="ca2e139d1020cec69846f4362420e2d30c80e6030de8cf9b294fd11071cde3cd" exitCode=0 Dec 01 14:59:16 crc kubenswrapper[4637]: I1201 14:59:16.775910 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp" event={"ID":"ae244c42-651d-4f31-9639-ba005da6ccc9","Type":"ContainerDied","Data":"ca2e139d1020cec69846f4362420e2d30c80e6030de8cf9b294fd11071cde3cd"} Dec 01 14:59:18 crc kubenswrapper[4637]: I1201 14:59:18.111566 
4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp" Dec 01 14:59:18 crc kubenswrapper[4637]: I1201 14:59:18.257007 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ae244c42-651d-4f31-9639-ba005da6ccc9-bundle\") pod \"ae244c42-651d-4f31-9639-ba005da6ccc9\" (UID: \"ae244c42-651d-4f31-9639-ba005da6ccc9\") " Dec 01 14:59:18 crc kubenswrapper[4637]: I1201 14:59:18.257116 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ae244c42-651d-4f31-9639-ba005da6ccc9-util\") pod \"ae244c42-651d-4f31-9639-ba005da6ccc9\" (UID: \"ae244c42-651d-4f31-9639-ba005da6ccc9\") " Dec 01 14:59:18 crc kubenswrapper[4637]: I1201 14:59:18.257174 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkqqq\" (UniqueName: \"kubernetes.io/projected/ae244c42-651d-4f31-9639-ba005da6ccc9-kube-api-access-qkqqq\") pod \"ae244c42-651d-4f31-9639-ba005da6ccc9\" (UID: \"ae244c42-651d-4f31-9639-ba005da6ccc9\") " Dec 01 14:59:18 crc kubenswrapper[4637]: I1201 14:59:18.258123 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae244c42-651d-4f31-9639-ba005da6ccc9-bundle" (OuterVolumeSpecName: "bundle") pod "ae244c42-651d-4f31-9639-ba005da6ccc9" (UID: "ae244c42-651d-4f31-9639-ba005da6ccc9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:59:18 crc kubenswrapper[4637]: I1201 14:59:18.263105 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae244c42-651d-4f31-9639-ba005da6ccc9-kube-api-access-qkqqq" (OuterVolumeSpecName: "kube-api-access-qkqqq") pod "ae244c42-651d-4f31-9639-ba005da6ccc9" (UID: "ae244c42-651d-4f31-9639-ba005da6ccc9"). 
InnerVolumeSpecName "kube-api-access-qkqqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 14:59:18 crc kubenswrapper[4637]: I1201 14:59:18.273849 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae244c42-651d-4f31-9639-ba005da6ccc9-util" (OuterVolumeSpecName: "util") pod "ae244c42-651d-4f31-9639-ba005da6ccc9" (UID: "ae244c42-651d-4f31-9639-ba005da6ccc9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 14:59:18 crc kubenswrapper[4637]: I1201 14:59:18.358465 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkqqq\" (UniqueName: \"kubernetes.io/projected/ae244c42-651d-4f31-9639-ba005da6ccc9-kube-api-access-qkqqq\") on node \"crc\" DevicePath \"\"" Dec 01 14:59:18 crc kubenswrapper[4637]: I1201 14:59:18.358514 4637 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ae244c42-651d-4f31-9639-ba005da6ccc9-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 14:59:18 crc kubenswrapper[4637]: I1201 14:59:18.358525 4637 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ae244c42-651d-4f31-9639-ba005da6ccc9-util\") on node \"crc\" DevicePath \"\"" Dec 01 14:59:18 crc kubenswrapper[4637]: I1201 14:59:18.806250 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp" event={"ID":"ae244c42-651d-4f31-9639-ba005da6ccc9","Type":"ContainerDied","Data":"7374bce1b1eb51514601a8a8db7388848ce73ac926924be6b210a1280346c529"} Dec 01 14:59:18 crc kubenswrapper[4637]: I1201 14:59:18.806610 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7374bce1b1eb51514601a8a8db7388848ce73ac926924be6b210a1280346c529" Dec 01 14:59:18 crc kubenswrapper[4637]: I1201 14:59:18.806351 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp" Dec 01 14:59:26 crc kubenswrapper[4637]: I1201 14:59:26.134689 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5bf66cbd54-q5fc8"] Dec 01 14:59:26 crc kubenswrapper[4637]: E1201 14:59:26.135661 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae244c42-651d-4f31-9639-ba005da6ccc9" containerName="util" Dec 01 14:59:26 crc kubenswrapper[4637]: I1201 14:59:26.135677 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae244c42-651d-4f31-9639-ba005da6ccc9" containerName="util" Dec 01 14:59:26 crc kubenswrapper[4637]: E1201 14:59:26.135695 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae244c42-651d-4f31-9639-ba005da6ccc9" containerName="extract" Dec 01 14:59:26 crc kubenswrapper[4637]: I1201 14:59:26.135703 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae244c42-651d-4f31-9639-ba005da6ccc9" containerName="extract" Dec 01 14:59:26 crc kubenswrapper[4637]: E1201 14:59:26.135717 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae244c42-651d-4f31-9639-ba005da6ccc9" containerName="pull" Dec 01 14:59:26 crc kubenswrapper[4637]: I1201 14:59:26.135725 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae244c42-651d-4f31-9639-ba005da6ccc9" containerName="pull" Dec 01 14:59:26 crc kubenswrapper[4637]: I1201 14:59:26.135871 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae244c42-651d-4f31-9639-ba005da6ccc9" containerName="extract" Dec 01 14:59:26 crc kubenswrapper[4637]: I1201 14:59:26.136698 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5bf66cbd54-q5fc8" Dec 01 14:59:26 crc kubenswrapper[4637]: I1201 14:59:26.140401 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-wpzn9" Dec 01 14:59:26 crc kubenswrapper[4637]: I1201 14:59:26.176398 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5bf66cbd54-q5fc8"] Dec 01 14:59:26 crc kubenswrapper[4637]: I1201 14:59:26.184835 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g4gt\" (UniqueName: \"kubernetes.io/projected/2d479f26-6243-4089-9fcd-1821d05cf3f4-kube-api-access-2g4gt\") pod \"openstack-operator-controller-operator-5bf66cbd54-q5fc8\" (UID: \"2d479f26-6243-4089-9fcd-1821d05cf3f4\") " pod="openstack-operators/openstack-operator-controller-operator-5bf66cbd54-q5fc8" Dec 01 14:59:26 crc kubenswrapper[4637]: I1201 14:59:26.286715 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g4gt\" (UniqueName: \"kubernetes.io/projected/2d479f26-6243-4089-9fcd-1821d05cf3f4-kube-api-access-2g4gt\") pod \"openstack-operator-controller-operator-5bf66cbd54-q5fc8\" (UID: \"2d479f26-6243-4089-9fcd-1821d05cf3f4\") " pod="openstack-operators/openstack-operator-controller-operator-5bf66cbd54-q5fc8" Dec 01 14:59:26 crc kubenswrapper[4637]: I1201 14:59:26.314699 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g4gt\" (UniqueName: \"kubernetes.io/projected/2d479f26-6243-4089-9fcd-1821d05cf3f4-kube-api-access-2g4gt\") pod \"openstack-operator-controller-operator-5bf66cbd54-q5fc8\" (UID: \"2d479f26-6243-4089-9fcd-1821d05cf3f4\") " pod="openstack-operators/openstack-operator-controller-operator-5bf66cbd54-q5fc8" Dec 01 14:59:26 crc kubenswrapper[4637]: I1201 14:59:26.458969 4637 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5bf66cbd54-q5fc8" Dec 01 14:59:26 crc kubenswrapper[4637]: I1201 14:59:26.743740 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5bf66cbd54-q5fc8"] Dec 01 14:59:26 crc kubenswrapper[4637]: I1201 14:59:26.860292 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5bf66cbd54-q5fc8" event={"ID":"2d479f26-6243-4089-9fcd-1821d05cf3f4","Type":"ContainerStarted","Data":"2e4d9e50a43e7e63e7c24b19b89d8f3026cfc6be0dcf03100b2410cde915a9ef"} Dec 01 14:59:31 crc kubenswrapper[4637]: I1201 14:59:31.898520 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5bf66cbd54-q5fc8" event={"ID":"2d479f26-6243-4089-9fcd-1821d05cf3f4","Type":"ContainerStarted","Data":"90fd891a033d48aa67de39268c2454a80185bf46c3b9530da5db599c7e8c0258"} Dec 01 14:59:35 crc kubenswrapper[4637]: I1201 14:59:35.932307 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5bf66cbd54-q5fc8" event={"ID":"2d479f26-6243-4089-9fcd-1821d05cf3f4","Type":"ContainerStarted","Data":"c0aa40fc6d3ceef0f3b4dff195f869bf801f535ced299bb73dc2b19c06a46aac"} Dec 01 14:59:35 crc kubenswrapper[4637]: I1201 14:59:35.933223 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-5bf66cbd54-q5fc8" Dec 01 14:59:35 crc kubenswrapper[4637]: I1201 14:59:35.972809 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-5bf66cbd54-q5fc8" podStartSLOduration=1.688695708 podStartE2EDuration="9.972789035s" podCreationTimestamp="2025-12-01 14:59:26 +0000 UTC" firstStartedPulling="2025-12-01 
14:59:26.761523643 +0000 UTC m=+817.279232471" lastFinishedPulling="2025-12-01 14:59:35.04561698 +0000 UTC m=+825.563325798" observedRunningTime="2025-12-01 14:59:35.970811002 +0000 UTC m=+826.488519850" watchObservedRunningTime="2025-12-01 14:59:35.972789035 +0000 UTC m=+826.490497883" Dec 01 14:59:36 crc kubenswrapper[4637]: I1201 14:59:36.462521 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-5bf66cbd54-q5fc8" Dec 01 15:00:00 crc kubenswrapper[4637]: I1201 15:00:00.183842 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410020-kd9hw"] Dec 01 15:00:00 crc kubenswrapper[4637]: I1201 15:00:00.185855 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29410020-kd9hw" Dec 01 15:00:00 crc kubenswrapper[4637]: I1201 15:00:00.200911 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410020-kd9hw"] Dec 01 15:00:00 crc kubenswrapper[4637]: I1201 15:00:00.204104 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 15:00:00 crc kubenswrapper[4637]: I1201 15:00:00.204187 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 15:00:00 crc kubenswrapper[4637]: I1201 15:00:00.217952 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8b6a7cd-467f-437c-9fd5-b840f2aa6504-config-volume\") pod \"collect-profiles-29410020-kd9hw\" (UID: \"e8b6a7cd-467f-437c-9fd5-b840f2aa6504\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410020-kd9hw" Dec 01 15:00:00 crc kubenswrapper[4637]: I1201 
15:00:00.218328 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xk2g\" (UniqueName: \"kubernetes.io/projected/e8b6a7cd-467f-437c-9fd5-b840f2aa6504-kube-api-access-5xk2g\") pod \"collect-profiles-29410020-kd9hw\" (UID: \"e8b6a7cd-467f-437c-9fd5-b840f2aa6504\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410020-kd9hw" Dec 01 15:00:00 crc kubenswrapper[4637]: I1201 15:00:00.218508 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8b6a7cd-467f-437c-9fd5-b840f2aa6504-secret-volume\") pod \"collect-profiles-29410020-kd9hw\" (UID: \"e8b6a7cd-467f-437c-9fd5-b840f2aa6504\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410020-kd9hw" Dec 01 15:00:00 crc kubenswrapper[4637]: I1201 15:00:00.319335 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8b6a7cd-467f-437c-9fd5-b840f2aa6504-config-volume\") pod \"collect-profiles-29410020-kd9hw\" (UID: \"e8b6a7cd-467f-437c-9fd5-b840f2aa6504\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410020-kd9hw" Dec 01 15:00:00 crc kubenswrapper[4637]: I1201 15:00:00.319390 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xk2g\" (UniqueName: \"kubernetes.io/projected/e8b6a7cd-467f-437c-9fd5-b840f2aa6504-kube-api-access-5xk2g\") pod \"collect-profiles-29410020-kd9hw\" (UID: \"e8b6a7cd-467f-437c-9fd5-b840f2aa6504\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410020-kd9hw" Dec 01 15:00:00 crc kubenswrapper[4637]: I1201 15:00:00.319575 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8b6a7cd-467f-437c-9fd5-b840f2aa6504-secret-volume\") pod \"collect-profiles-29410020-kd9hw\" (UID: 
\"e8b6a7cd-467f-437c-9fd5-b840f2aa6504\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410020-kd9hw" Dec 01 15:00:00 crc kubenswrapper[4637]: I1201 15:00:00.320434 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8b6a7cd-467f-437c-9fd5-b840f2aa6504-config-volume\") pod \"collect-profiles-29410020-kd9hw\" (UID: \"e8b6a7cd-467f-437c-9fd5-b840f2aa6504\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410020-kd9hw" Dec 01 15:00:00 crc kubenswrapper[4637]: I1201 15:00:00.333204 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8b6a7cd-467f-437c-9fd5-b840f2aa6504-secret-volume\") pod \"collect-profiles-29410020-kd9hw\" (UID: \"e8b6a7cd-467f-437c-9fd5-b840f2aa6504\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410020-kd9hw" Dec 01 15:00:00 crc kubenswrapper[4637]: I1201 15:00:00.342158 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xk2g\" (UniqueName: \"kubernetes.io/projected/e8b6a7cd-467f-437c-9fd5-b840f2aa6504-kube-api-access-5xk2g\") pod \"collect-profiles-29410020-kd9hw\" (UID: \"e8b6a7cd-467f-437c-9fd5-b840f2aa6504\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410020-kd9hw" Dec 01 15:00:00 crc kubenswrapper[4637]: I1201 15:00:00.508529 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29410020-kd9hw" Dec 01 15:00:00 crc kubenswrapper[4637]: I1201 15:00:00.743292 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410020-kd9hw"] Dec 01 15:00:01 crc kubenswrapper[4637]: I1201 15:00:01.114787 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29410020-kd9hw" event={"ID":"e8b6a7cd-467f-437c-9fd5-b840f2aa6504","Type":"ContainerStarted","Data":"5312d8347ddd08bae8df0d3268b87ff146e6dc12fd345749264685a0b63e07eb"} Dec 01 15:00:02 crc kubenswrapper[4637]: I1201 15:00:02.124730 4637 generic.go:334] "Generic (PLEG): container finished" podID="e8b6a7cd-467f-437c-9fd5-b840f2aa6504" containerID="44e7fa27302de930c3554e7e2ca2cc02d72e060d59da38b07def371b14dfe028" exitCode=0 Dec 01 15:00:02 crc kubenswrapper[4637]: I1201 15:00:02.124793 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29410020-kd9hw" event={"ID":"e8b6a7cd-467f-437c-9fd5-b840f2aa6504","Type":"ContainerDied","Data":"44e7fa27302de930c3554e7e2ca2cc02d72e060d59da38b07def371b14dfe028"} Dec 01 15:00:03 crc kubenswrapper[4637]: I1201 15:00:03.530332 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29410020-kd9hw" Dec 01 15:00:03 crc kubenswrapper[4637]: I1201 15:00:03.566583 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8b6a7cd-467f-437c-9fd5-b840f2aa6504-config-volume\") pod \"e8b6a7cd-467f-437c-9fd5-b840f2aa6504\" (UID: \"e8b6a7cd-467f-437c-9fd5-b840f2aa6504\") " Dec 01 15:00:03 crc kubenswrapper[4637]: I1201 15:00:03.566772 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8b6a7cd-467f-437c-9fd5-b840f2aa6504-secret-volume\") pod \"e8b6a7cd-467f-437c-9fd5-b840f2aa6504\" (UID: \"e8b6a7cd-467f-437c-9fd5-b840f2aa6504\") " Dec 01 15:00:03 crc kubenswrapper[4637]: I1201 15:00:03.566855 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xk2g\" (UniqueName: \"kubernetes.io/projected/e8b6a7cd-467f-437c-9fd5-b840f2aa6504-kube-api-access-5xk2g\") pod \"e8b6a7cd-467f-437c-9fd5-b840f2aa6504\" (UID: \"e8b6a7cd-467f-437c-9fd5-b840f2aa6504\") " Dec 01 15:00:03 crc kubenswrapper[4637]: I1201 15:00:03.567132 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8b6a7cd-467f-437c-9fd5-b840f2aa6504-config-volume" (OuterVolumeSpecName: "config-volume") pod "e8b6a7cd-467f-437c-9fd5-b840f2aa6504" (UID: "e8b6a7cd-467f-437c-9fd5-b840f2aa6504"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:00:03 crc kubenswrapper[4637]: I1201 15:00:03.567253 4637 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8b6a7cd-467f-437c-9fd5-b840f2aa6504-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 15:00:03 crc kubenswrapper[4637]: I1201 15:00:03.572122 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8b6a7cd-467f-437c-9fd5-b840f2aa6504-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e8b6a7cd-467f-437c-9fd5-b840f2aa6504" (UID: "e8b6a7cd-467f-437c-9fd5-b840f2aa6504"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:00:03 crc kubenswrapper[4637]: I1201 15:00:03.577707 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8b6a7cd-467f-437c-9fd5-b840f2aa6504-kube-api-access-5xk2g" (OuterVolumeSpecName: "kube-api-access-5xk2g") pod "e8b6a7cd-467f-437c-9fd5-b840f2aa6504" (UID: "e8b6a7cd-467f-437c-9fd5-b840f2aa6504"). InnerVolumeSpecName "kube-api-access-5xk2g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:00:03 crc kubenswrapper[4637]: I1201 15:00:03.669132 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xk2g\" (UniqueName: \"kubernetes.io/projected/e8b6a7cd-467f-437c-9fd5-b840f2aa6504-kube-api-access-5xk2g\") on node \"crc\" DevicePath \"\"" Dec 01 15:00:03 crc kubenswrapper[4637]: I1201 15:00:03.669176 4637 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8b6a7cd-467f-437c-9fd5-b840f2aa6504-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 15:00:04 crc kubenswrapper[4637]: I1201 15:00:04.140130 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29410020-kd9hw" event={"ID":"e8b6a7cd-467f-437c-9fd5-b840f2aa6504","Type":"ContainerDied","Data":"5312d8347ddd08bae8df0d3268b87ff146e6dc12fd345749264685a0b63e07eb"} Dec 01 15:00:04 crc kubenswrapper[4637]: I1201 15:00:04.140170 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29410020-kd9hw" Dec 01 15:00:04 crc kubenswrapper[4637]: I1201 15:00:04.140181 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5312d8347ddd08bae8df0d3268b87ff146e6dc12fd345749264685a0b63e07eb" Dec 01 15:00:08 crc kubenswrapper[4637]: I1201 15:00:08.840728 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5bfbbb859d-xm2c9"] Dec 01 15:00:08 crc kubenswrapper[4637]: E1201 15:00:08.841328 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8b6a7cd-467f-437c-9fd5-b840f2aa6504" containerName="collect-profiles" Dec 01 15:00:08 crc kubenswrapper[4637]: I1201 15:00:08.841342 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8b6a7cd-467f-437c-9fd5-b840f2aa6504" containerName="collect-profiles" Dec 01 15:00:08 crc kubenswrapper[4637]: I1201 15:00:08.841456 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8b6a7cd-467f-437c-9fd5-b840f2aa6504" containerName="collect-profiles" Dec 01 15:00:08 crc kubenswrapper[4637]: I1201 15:00:08.842172 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-xm2c9" Dec 01 15:00:08 crc kubenswrapper[4637]: I1201 15:00:08.844831 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-hgksr" Dec 01 15:00:08 crc kubenswrapper[4637]: I1201 15:00:08.860119 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jsvx\" (UniqueName: \"kubernetes.io/projected/e3fe5f37-3b9c-4d1d-9890-920cfaad9b36-kube-api-access-9jsvx\") pod \"barbican-operator-controller-manager-5bfbbb859d-xm2c9\" (UID: \"e3fe5f37-3b9c-4d1d-9890-920cfaad9b36\") " pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-xm2c9" Dec 01 15:00:08 crc kubenswrapper[4637]: I1201 15:00:08.874192 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5bfbbb859d-xm2c9"] Dec 01 15:00:08 crc kubenswrapper[4637]: I1201 15:00:08.896197 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6788cc6d75-ngs55"] Dec 01 15:00:08 crc kubenswrapper[4637]: I1201 15:00:08.897581 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-ngs55" Dec 01 15:00:08 crc kubenswrapper[4637]: I1201 15:00:08.901409 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-748967c98-7t5h2"] Dec 01 15:00:08 crc kubenswrapper[4637]: I1201 15:00:08.902510 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-748967c98-7t5h2" Dec 01 15:00:08 crc kubenswrapper[4637]: I1201 15:00:08.903518 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-lxpvg" Dec 01 15:00:08 crc kubenswrapper[4637]: I1201 15:00:08.904236 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-6zdmd" Dec 01 15:00:08 crc kubenswrapper[4637]: I1201 15:00:08.925759 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-6bd966bbd4-4c7cm"] Dec 01 15:00:08 crc kubenswrapper[4637]: I1201 15:00:08.927217 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-4c7cm" Dec 01 15:00:08 crc kubenswrapper[4637]: I1201 15:00:08.931515 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-hfgl9" Dec 01 15:00:08 crc kubenswrapper[4637]: I1201 15:00:08.955060 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6788cc6d75-ngs55"] Dec 01 15:00:08 crc kubenswrapper[4637]: I1201 15:00:08.965825 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mlmv\" (UniqueName: \"kubernetes.io/projected/9a6330bc-2072-40b9-a81b-00d532b6b804-kube-api-access-9mlmv\") pod \"glance-operator-controller-manager-6bd966bbd4-4c7cm\" (UID: \"9a6330bc-2072-40b9-a81b-00d532b6b804\") " pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-4c7cm" Dec 01 15:00:08 crc kubenswrapper[4637]: I1201 15:00:08.965883 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4d95\" 
(UniqueName: \"kubernetes.io/projected/612d1951-263e-4d58-a3ab-94f8b2ddcb68-kube-api-access-v4d95\") pod \"cinder-operator-controller-manager-748967c98-7t5h2\" (UID: \"612d1951-263e-4d58-a3ab-94f8b2ddcb68\") " pod="openstack-operators/cinder-operator-controller-manager-748967c98-7t5h2" Dec 01 15:00:08 crc kubenswrapper[4637]: I1201 15:00:08.965974 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jsvx\" (UniqueName: \"kubernetes.io/projected/e3fe5f37-3b9c-4d1d-9890-920cfaad9b36-kube-api-access-9jsvx\") pod \"barbican-operator-controller-manager-5bfbbb859d-xm2c9\" (UID: \"e3fe5f37-3b9c-4d1d-9890-920cfaad9b36\") " pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-xm2c9" Dec 01 15:00:08 crc kubenswrapper[4637]: I1201 15:00:08.966037 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9sdv\" (UniqueName: \"kubernetes.io/projected/e552181e-b9e1-43f4-825f-649923e52631-kube-api-access-l9sdv\") pod \"designate-operator-controller-manager-6788cc6d75-ngs55\" (UID: \"e552181e-b9e1-43f4-825f-649923e52631\") " pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-ngs55" Dec 01 15:00:08 crc kubenswrapper[4637]: I1201 15:00:08.971780 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-748967c98-7t5h2"] Dec 01 15:00:08 crc kubenswrapper[4637]: I1201 15:00:08.994025 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6bd966bbd4-4c7cm"] Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.016585 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-698d6fd7d6-4qp7b"] Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.017884 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-4qp7b" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.034329 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-c5cdv" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.049105 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jsvx\" (UniqueName: \"kubernetes.io/projected/e3fe5f37-3b9c-4d1d-9890-920cfaad9b36-kube-api-access-9jsvx\") pod \"barbican-operator-controller-manager-5bfbbb859d-xm2c9\" (UID: \"e3fe5f37-3b9c-4d1d-9890-920cfaad9b36\") " pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-xm2c9" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.067844 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mlmv\" (UniqueName: \"kubernetes.io/projected/9a6330bc-2072-40b9-a81b-00d532b6b804-kube-api-access-9mlmv\") pod \"glance-operator-controller-manager-6bd966bbd4-4c7cm\" (UID: \"9a6330bc-2072-40b9-a81b-00d532b6b804\") " pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-4c7cm" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.067888 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4d95\" (UniqueName: \"kubernetes.io/projected/612d1951-263e-4d58-a3ab-94f8b2ddcb68-kube-api-access-v4d95\") pod \"cinder-operator-controller-manager-748967c98-7t5h2\" (UID: \"612d1951-263e-4d58-a3ab-94f8b2ddcb68\") " pod="openstack-operators/cinder-operator-controller-manager-748967c98-7t5h2" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.067960 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9sdv\" (UniqueName: \"kubernetes.io/projected/e552181e-b9e1-43f4-825f-649923e52631-kube-api-access-l9sdv\") pod 
\"designate-operator-controller-manager-6788cc6d75-ngs55\" (UID: \"e552181e-b9e1-43f4-825f-649923e52631\") " pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-ngs55" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.069253 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-698d6fd7d6-4qp7b"] Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.105868 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9sdv\" (UniqueName: \"kubernetes.io/projected/e552181e-b9e1-43f4-825f-649923e52631-kube-api-access-l9sdv\") pod \"designate-operator-controller-manager-6788cc6d75-ngs55\" (UID: \"e552181e-b9e1-43f4-825f-649923e52631\") " pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-ngs55" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.105982 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-njp6w"] Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.107355 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-njp6w" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.108060 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4d95\" (UniqueName: \"kubernetes.io/projected/612d1951-263e-4d58-a3ab-94f8b2ddcb68-kube-api-access-v4d95\") pod \"cinder-operator-controller-manager-748967c98-7t5h2\" (UID: \"612d1951-263e-4d58-a3ab-94f8b2ddcb68\") " pod="openstack-operators/cinder-operator-controller-manager-748967c98-7t5h2" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.114382 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-b8ghs" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.130648 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mlmv\" (UniqueName: \"kubernetes.io/projected/9a6330bc-2072-40b9-a81b-00d532b6b804-kube-api-access-9mlmv\") pod \"glance-operator-controller-manager-6bd966bbd4-4c7cm\" (UID: \"9a6330bc-2072-40b9-a81b-00d532b6b804\") " pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-4c7cm" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.139018 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-njp6w"] Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.154700 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-577c5f6d94-kx7cf"] Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.163753 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-kx7cf" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.164098 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-xm2c9" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.165010 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-54485f899-c8xnq"] Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.166056 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-54485f899-c8xnq" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.166644 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-jgsgf" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.166684 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.169879 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnzdf\" (UniqueName: \"kubernetes.io/projected/df23c8f8-8046-4e98-a46b-cc7c829981b9-kube-api-access-dnzdf\") pod \"heat-operator-controller-manager-698d6fd7d6-4qp7b\" (UID: \"df23c8f8-8046-4e98-a46b-cc7c829981b9\") " pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-4qp7b" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.174514 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-6jkcv" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.225755 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-ngs55" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.244899 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-54485f899-c8xnq"] Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.245586 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-748967c98-7t5h2" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.255054 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-577c5f6d94-kx7cf"] Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.306147 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7d6f5d799-p5htp"] Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.308439 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtddx\" (UniqueName: \"kubernetes.io/projected/60365f73-6418-4fdc-901b-07a2321fdcf3-kube-api-access-jtddx\") pod \"horizon-operator-controller-manager-7d5d9fd47f-njp6w\" (UID: \"60365f73-6418-4fdc-901b-07a2321fdcf3\") " pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-njp6w" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.311652 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-4c7cm" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.336950 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9235c8f6-6738-496d-a945-42ba5d15afd2-cert\") pod \"infra-operator-controller-manager-577c5f6d94-kx7cf\" (UID: \"9235c8f6-6738-496d-a945-42ba5d15afd2\") " pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-kx7cf" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.345266 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9zn6\" (UniqueName: \"kubernetes.io/projected/9235c8f6-6738-496d-a945-42ba5d15afd2-kube-api-access-z9zn6\") pod \"infra-operator-controller-manager-577c5f6d94-kx7cf\" (UID: \"9235c8f6-6738-496d-a945-42ba5d15afd2\") " pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-kx7cf" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.345444 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnzdf\" (UniqueName: \"kubernetes.io/projected/df23c8f8-8046-4e98-a46b-cc7c829981b9-kube-api-access-dnzdf\") pod \"heat-operator-controller-manager-698d6fd7d6-4qp7b\" (UID: \"df23c8f8-8046-4e98-a46b-cc7c829981b9\") " pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-4qp7b" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.345546 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26sn5\" (UniqueName: \"kubernetes.io/projected/ee97ba6c-4f2f-4a8a-b631-ae8a77b4c35b-kube-api-access-26sn5\") pod \"ironic-operator-controller-manager-54485f899-c8xnq\" (UID: \"ee97ba6c-4f2f-4a8a-b631-ae8a77b4c35b\") " pod="openstack-operators/ironic-operator-controller-manager-54485f899-c8xnq" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 
15:00:09.372474 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-p5htp" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.402490 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-kp957" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.430970 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7d6f5d799-p5htp"] Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.447640 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnzdf\" (UniqueName: \"kubernetes.io/projected/df23c8f8-8046-4e98-a46b-cc7c829981b9-kube-api-access-dnzdf\") pod \"heat-operator-controller-manager-698d6fd7d6-4qp7b\" (UID: \"df23c8f8-8046-4e98-a46b-cc7c829981b9\") " pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-4qp7b" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.457276 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9zn6\" (UniqueName: \"kubernetes.io/projected/9235c8f6-6738-496d-a945-42ba5d15afd2-kube-api-access-z9zn6\") pod \"infra-operator-controller-manager-577c5f6d94-kx7cf\" (UID: \"9235c8f6-6738-496d-a945-42ba5d15afd2\") " pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-kx7cf" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.457418 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kw2n\" (UniqueName: \"kubernetes.io/projected/68dbc1ea-c95b-48b1-a4a3-542c87f531ac-kube-api-access-4kw2n\") pod \"keystone-operator-controller-manager-7d6f5d799-p5htp\" (UID: \"68dbc1ea-c95b-48b1-a4a3-542c87f531ac\") " pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-p5htp" Dec 01 15:00:09 crc 
kubenswrapper[4637]: I1201 15:00:09.457504 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26sn5\" (UniqueName: \"kubernetes.io/projected/ee97ba6c-4f2f-4a8a-b631-ae8a77b4c35b-kube-api-access-26sn5\") pod \"ironic-operator-controller-manager-54485f899-c8xnq\" (UID: \"ee97ba6c-4f2f-4a8a-b631-ae8a77b4c35b\") " pod="openstack-operators/ironic-operator-controller-manager-54485f899-c8xnq" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.457624 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtddx\" (UniqueName: \"kubernetes.io/projected/60365f73-6418-4fdc-901b-07a2321fdcf3-kube-api-access-jtddx\") pod \"horizon-operator-controller-manager-7d5d9fd47f-njp6w\" (UID: \"60365f73-6418-4fdc-901b-07a2321fdcf3\") " pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-njp6w" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.457717 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9235c8f6-6738-496d-a945-42ba5d15afd2-cert\") pod \"infra-operator-controller-manager-577c5f6d94-kx7cf\" (UID: \"9235c8f6-6738-496d-a945-42ba5d15afd2\") " pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-kx7cf" Dec 01 15:00:09 crc kubenswrapper[4637]: E1201 15:00:09.488594 4637 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 01 15:00:09 crc kubenswrapper[4637]: E1201 15:00:09.488779 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9235c8f6-6738-496d-a945-42ba5d15afd2-cert podName:9235c8f6-6738-496d-a945-42ba5d15afd2 nodeName:}" failed. No retries permitted until 2025-12-01 15:00:09.988756558 +0000 UTC m=+860.506465386 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9235c8f6-6738-496d-a945-42ba5d15afd2-cert") pod "infra-operator-controller-manager-577c5f6d94-kx7cf" (UID: "9235c8f6-6738-496d-a945-42ba5d15afd2") : secret "infra-operator-webhook-server-cert" not found Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.499429 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-646fd589f9-4h7xc"] Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.512533 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-4h7xc" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.523240 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-2kqrn" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.532094 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-64d7c556cd-ng9gk"] Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.533538 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-ng9gk" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.547842 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-xdr9k" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.561317 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kw2n\" (UniqueName: \"kubernetes.io/projected/68dbc1ea-c95b-48b1-a4a3-542c87f531ac-kube-api-access-4kw2n\") pod \"keystone-operator-controller-manager-7d6f5d799-p5htp\" (UID: \"68dbc1ea-c95b-48b1-a4a3-542c87f531ac\") " pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-p5htp" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.561355 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz8h8\" (UniqueName: \"kubernetes.io/projected/0c561b38-c3aa-492a-bcec-9c471c3fbf0b-kube-api-access-fz8h8\") pod \"manila-operator-controller-manager-646fd589f9-4h7xc\" (UID: \"0c561b38-c3aa-492a-bcec-9c471c3fbf0b\") " pod="openstack-operators/manila-operator-controller-manager-646fd589f9-4h7xc" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.561395 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h95d\" (UniqueName: \"kubernetes.io/projected/641c4df0-62e4-4b62-8f75-60e49bb56f7a-kube-api-access-2h95d\") pod \"mariadb-operator-controller-manager-64d7c556cd-ng9gk\" (UID: \"641c4df0-62e4-4b62-8f75-60e49bb56f7a\") " pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-ng9gk" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.562488 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9zn6\" (UniqueName: \"kubernetes.io/projected/9235c8f6-6738-496d-a945-42ba5d15afd2-kube-api-access-z9zn6\") pod 
\"infra-operator-controller-manager-577c5f6d94-kx7cf\" (UID: \"9235c8f6-6738-496d-a945-42ba5d15afd2\") " pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-kx7cf" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.607986 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtddx\" (UniqueName: \"kubernetes.io/projected/60365f73-6418-4fdc-901b-07a2321fdcf3-kube-api-access-jtddx\") pod \"horizon-operator-controller-manager-7d5d9fd47f-njp6w\" (UID: \"60365f73-6418-4fdc-901b-07a2321fdcf3\") " pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-njp6w" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.608440 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26sn5\" (UniqueName: \"kubernetes.io/projected/ee97ba6c-4f2f-4a8a-b631-ae8a77b4c35b-kube-api-access-26sn5\") pod \"ironic-operator-controller-manager-54485f899-c8xnq\" (UID: \"ee97ba6c-4f2f-4a8a-b631-ae8a77b4c35b\") " pod="openstack-operators/ironic-operator-controller-manager-54485f899-c8xnq" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.618486 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-646fd589f9-4h7xc"] Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.639685 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kw2n\" (UniqueName: \"kubernetes.io/projected/68dbc1ea-c95b-48b1-a4a3-542c87f531ac-kube-api-access-4kw2n\") pod \"keystone-operator-controller-manager-7d6f5d799-p5htp\" (UID: \"68dbc1ea-c95b-48b1-a4a3-542c87f531ac\") " pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-p5htp" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.642543 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-vhj5n"] Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.643882 4637 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-vhj5n" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.650715 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-r989t" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.662051 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-64d7c556cd-ng9gk"] Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.663290 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz8h8\" (UniqueName: \"kubernetes.io/projected/0c561b38-c3aa-492a-bcec-9c471c3fbf0b-kube-api-access-fz8h8\") pod \"manila-operator-controller-manager-646fd589f9-4h7xc\" (UID: \"0c561b38-c3aa-492a-bcec-9c471c3fbf0b\") " pod="openstack-operators/manila-operator-controller-manager-646fd589f9-4h7xc" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.663430 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt8q6\" (UniqueName: \"kubernetes.io/projected/1f5d18af-662c-438a-ab53-62d6c6049921-kube-api-access-qt8q6\") pod \"neutron-operator-controller-manager-6b6c55ffd5-vhj5n\" (UID: \"1f5d18af-662c-438a-ab53-62d6c6049921\") " pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-vhj5n" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.663529 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h95d\" (UniqueName: \"kubernetes.io/projected/641c4df0-62e4-4b62-8f75-60e49bb56f7a-kube-api-access-2h95d\") pod \"mariadb-operator-controller-manager-64d7c556cd-ng9gk\" (UID: \"641c4df0-62e4-4b62-8f75-60e49bb56f7a\") " pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-ng9gk" Dec 01 15:00:09 crc kubenswrapper[4637]: 
I1201 15:00:09.685191 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-4qp7b" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.695909 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-vhj5n"] Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.715638 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79d658b66d-j8flc"] Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.720374 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-j8flc" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.738660 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-v7llw" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.740763 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-p5htp" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.747575 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79d658b66d-j8flc"] Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.762147 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7979c68bc7-69cgp"] Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.763704 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-69cgp" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.765233 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt8q6\" (UniqueName: \"kubernetes.io/projected/1f5d18af-662c-438a-ab53-62d6c6049921-kube-api-access-qt8q6\") pod \"neutron-operator-controller-manager-6b6c55ffd5-vhj5n\" (UID: \"1f5d18af-662c-438a-ab53-62d6c6049921\") " pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-vhj5n" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.765337 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mmvs\" (UniqueName: \"kubernetes.io/projected/bf0d761a-bcaa-4b9d-8e16-5c478c9a90d5-kube-api-access-6mmvs\") pod \"nova-operator-controller-manager-79d658b66d-j8flc\" (UID: \"bf0d761a-bcaa-4b9d-8e16-5c478c9a90d5\") " pod="openstack-operators/nova-operator-controller-manager-79d658b66d-j8flc" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.785809 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz8h8\" (UniqueName: \"kubernetes.io/projected/0c561b38-c3aa-492a-bcec-9c471c3fbf0b-kube-api-access-fz8h8\") pod \"manila-operator-controller-manager-646fd589f9-4h7xc\" (UID: \"0c561b38-c3aa-492a-bcec-9c471c3fbf0b\") " pod="openstack-operators/manila-operator-controller-manager-646fd589f9-4h7xc" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.794103 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-7tqkm" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.805150 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-njp6w" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.830868 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h95d\" (UniqueName: \"kubernetes.io/projected/641c4df0-62e4-4b62-8f75-60e49bb56f7a-kube-api-access-2h95d\") pod \"mariadb-operator-controller-manager-64d7c556cd-ng9gk\" (UID: \"641c4df0-62e4-4b62-8f75-60e49bb56f7a\") " pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-ng9gk" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.847627 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7979c68bc7-69cgp"] Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.849130 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-4h7xc" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.851193 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-wbjjx"] Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.852556 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-wbjjx" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.872110 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e99ec116-bc40-4275-b124-476b780bf9ca-cert\") pod \"openstack-baremetal-operator-controller-manager-77868f484-wbjjx\" (UID: \"e99ec116-bc40-4275-b124-476b780bf9ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-wbjjx" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.872516 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jshm\" (UniqueName: \"kubernetes.io/projected/e99ec116-bc40-4275-b124-476b780bf9ca-kube-api-access-2jshm\") pod \"openstack-baremetal-operator-controller-manager-77868f484-wbjjx\" (UID: \"e99ec116-bc40-4275-b124-476b780bf9ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-wbjjx" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.872671 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqmrc\" (UniqueName: \"kubernetes.io/projected/1836e03a-1ea3-4a52-98e5-9e6f7e04d1b0-kube-api-access-mqmrc\") pod \"octavia-operator-controller-manager-7979c68bc7-69cgp\" (UID: \"1836e03a-1ea3-4a52-98e5-9e6f7e04d1b0\") " pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-69cgp" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.872795 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mmvs\" (UniqueName: \"kubernetes.io/projected/bf0d761a-bcaa-4b9d-8e16-5c478c9a90d5-kube-api-access-6mmvs\") pod \"nova-operator-controller-manager-79d658b66d-j8flc\" (UID: \"bf0d761a-bcaa-4b9d-8e16-5c478c9a90d5\") " 
pod="openstack-operators/nova-operator-controller-manager-79d658b66d-j8flc" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.877119 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt8q6\" (UniqueName: \"kubernetes.io/projected/1f5d18af-662c-438a-ab53-62d6c6049921-kube-api-access-qt8q6\") pod \"neutron-operator-controller-manager-6b6c55ffd5-vhj5n\" (UID: \"1f5d18af-662c-438a-ab53-62d6c6049921\") " pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-vhj5n" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.877859 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-ng9gk" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.887451 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-fvcsn" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.887624 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.909434 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-54485f899-c8xnq" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.910504 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mmvs\" (UniqueName: \"kubernetes.io/projected/bf0d761a-bcaa-4b9d-8e16-5c478c9a90d5-kube-api-access-6mmvs\") pod \"nova-operator-controller-manager-79d658b66d-j8flc\" (UID: \"bf0d761a-bcaa-4b9d-8e16-5c478c9a90d5\") " pod="openstack-operators/nova-operator-controller-manager-79d658b66d-j8flc" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.925450 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-cnl9j"] Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.927707 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-cnl9j" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.940057 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-wbjjx"] Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.944651 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-qzm5j" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.951468 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-cc9f5bc5c-lfzmh"] Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.952816 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-lfzmh" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.962989 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-cnl9j"] Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.965716 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-bxxrh" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.977982 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-cc9f5bc5c-lfzmh"] Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.978497 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-vhj5n" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.985200 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e99ec116-bc40-4275-b124-476b780bf9ca-cert\") pod \"openstack-baremetal-operator-controller-manager-77868f484-wbjjx\" (UID: \"e99ec116-bc40-4275-b124-476b780bf9ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-wbjjx" Dec 01 15:00:09 crc kubenswrapper[4637]: E1201 15:00:09.985472 4637 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.985678 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jshm\" (UniqueName: \"kubernetes.io/projected/e99ec116-bc40-4275-b124-476b780bf9ca-kube-api-access-2jshm\") pod \"openstack-baremetal-operator-controller-manager-77868f484-wbjjx\" (UID: \"e99ec116-bc40-4275-b124-476b780bf9ca\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-wbjjx" Dec 01 15:00:09 crc kubenswrapper[4637]: E1201 15:00:09.985825 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e99ec116-bc40-4275-b124-476b780bf9ca-cert podName:e99ec116-bc40-4275-b124-476b780bf9ca nodeName:}" failed. No retries permitted until 2025-12-01 15:00:10.485795253 +0000 UTC m=+861.003504081 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e99ec116-bc40-4275-b124-476b780bf9ca-cert") pod "openstack-baremetal-operator-controller-manager-77868f484-wbjjx" (UID: "e99ec116-bc40-4275-b124-476b780bf9ca") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.986107 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt96q\" (UniqueName: \"kubernetes.io/projected/4f425213-2aa4-419c-b672-22a94b28958a-kube-api-access-qt96q\") pod \"swift-operator-controller-manager-cc9f5bc5c-lfzmh\" (UID: \"4f425213-2aa4-419c-b672-22a94b28958a\") " pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-lfzmh" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.986190 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqmrc\" (UniqueName: \"kubernetes.io/projected/1836e03a-1ea3-4a52-98e5-9e6f7e04d1b0-kube-api-access-mqmrc\") pod \"octavia-operator-controller-manager-7979c68bc7-69cgp\" (UID: \"1836e03a-1ea3-4a52-98e5-9e6f7e04d1b0\") " pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-69cgp" Dec 01 15:00:09 crc kubenswrapper[4637]: I1201 15:00:09.986357 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj8x8\" (UniqueName: \"kubernetes.io/projected/a4dea3ee-ee87-4f8b-8b76-55db9bc85fc9-kube-api-access-gj8x8\") 
pod \"ovn-operator-controller-manager-5b67cfc8fb-cnl9j\" (UID: \"a4dea3ee-ee87-4f8b-8b76-55db9bc85fc9\") " pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-cnl9j" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.021627 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-867d87977b-tpjpc"] Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.022894 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-867d87977b-tpjpc" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.038255 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-6npw8" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.047819 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jshm\" (UniqueName: \"kubernetes.io/projected/e99ec116-bc40-4275-b124-476b780bf9ca-kube-api-access-2jshm\") pod \"openstack-baremetal-operator-controller-manager-77868f484-wbjjx\" (UID: \"e99ec116-bc40-4275-b124-476b780bf9ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-wbjjx" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.057704 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-j8flc" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.078540 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqmrc\" (UniqueName: \"kubernetes.io/projected/1836e03a-1ea3-4a52-98e5-9e6f7e04d1b0-kube-api-access-mqmrc\") pod \"octavia-operator-controller-manager-7979c68bc7-69cgp\" (UID: \"1836e03a-1ea3-4a52-98e5-9e6f7e04d1b0\") " pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-69cgp" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.079556 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58487d9bf4-khr5x"] Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.146453 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-khr5x" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.161483 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-69cgp" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.164278 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9235c8f6-6738-496d-a945-42ba5d15afd2-cert\") pod \"infra-operator-controller-manager-577c5f6d94-kx7cf\" (UID: \"9235c8f6-6738-496d-a945-42ba5d15afd2\") " pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-kx7cf" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.171898 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-cqxjb" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.177221 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rthpw\" (UniqueName: \"kubernetes.io/projected/d8f49f2b-6edc-40e6-b5cf-da3e8f26009f-kube-api-access-rthpw\") pod \"placement-operator-controller-manager-867d87977b-tpjpc\" (UID: \"d8f49f2b-6edc-40e6-b5cf-da3e8f26009f\") " pod="openstack-operators/placement-operator-controller-manager-867d87977b-tpjpc" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.180611 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj8x8\" (UniqueName: \"kubernetes.io/projected/a4dea3ee-ee87-4f8b-8b76-55db9bc85fc9-kube-api-access-gj8x8\") pod \"ovn-operator-controller-manager-5b67cfc8fb-cnl9j\" (UID: \"a4dea3ee-ee87-4f8b-8b76-55db9bc85fc9\") " pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-cnl9j" Dec 01 15:00:10 crc kubenswrapper[4637]: E1201 15:00:10.178217 4637 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 01 15:00:10 crc kubenswrapper[4637]: E1201 15:00:10.192939 4637 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/9235c8f6-6738-496d-a945-42ba5d15afd2-cert podName:9235c8f6-6738-496d-a945-42ba5d15afd2 nodeName:}" failed. No retries permitted until 2025-12-01 15:00:11.192807796 +0000 UTC m=+861.710516634 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9235c8f6-6738-496d-a945-42ba5d15afd2-cert") pod "infra-operator-controller-manager-577c5f6d94-kx7cf" (UID: "9235c8f6-6738-496d-a945-42ba5d15afd2") : secret "infra-operator-webhook-server-cert" not found Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.185039 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt96q\" (UniqueName: \"kubernetes.io/projected/4f425213-2aa4-419c-b672-22a94b28958a-kube-api-access-qt96q\") pod \"swift-operator-controller-manager-cc9f5bc5c-lfzmh\" (UID: \"4f425213-2aa4-419c-b672-22a94b28958a\") " pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-lfzmh" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.289758 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt96q\" (UniqueName: \"kubernetes.io/projected/4f425213-2aa4-419c-b672-22a94b28958a-kube-api-access-qt96q\") pod \"swift-operator-controller-manager-cc9f5bc5c-lfzmh\" (UID: \"4f425213-2aa4-419c-b672-22a94b28958a\") " pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-lfzmh" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.298193 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqnc6\" (UniqueName: \"kubernetes.io/projected/75a2f55f-977e-4608-86e3-ad7cbb948420-kube-api-access-gqnc6\") pod \"telemetry-operator-controller-manager-58487d9bf4-khr5x\" (UID: \"75a2f55f-977e-4608-86e3-ad7cbb948420\") " pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-khr5x" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.298358 4637 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rthpw\" (UniqueName: \"kubernetes.io/projected/d8f49f2b-6edc-40e6-b5cf-da3e8f26009f-kube-api-access-rthpw\") pod \"placement-operator-controller-manager-867d87977b-tpjpc\" (UID: \"d8f49f2b-6edc-40e6-b5cf-da3e8f26009f\") " pod="openstack-operators/placement-operator-controller-manager-867d87977b-tpjpc" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.343248 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-lfzmh" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.361795 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-77db6bf9c-j4ktr"] Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.364312 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-j4ktr" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.401822 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqnc6\" (UniqueName: \"kubernetes.io/projected/75a2f55f-977e-4608-86e3-ad7cbb948420-kube-api-access-gqnc6\") pod \"telemetry-operator-controller-manager-58487d9bf4-khr5x\" (UID: \"75a2f55f-977e-4608-86e3-ad7cbb948420\") " pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-khr5x" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.402785 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj8x8\" (UniqueName: \"kubernetes.io/projected/a4dea3ee-ee87-4f8b-8b76-55db9bc85fc9-kube-api-access-gj8x8\") pod \"ovn-operator-controller-manager-5b67cfc8fb-cnl9j\" (UID: \"a4dea3ee-ee87-4f8b-8b76-55db9bc85fc9\") " pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-cnl9j" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.403971 4637 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rthpw\" (UniqueName: \"kubernetes.io/projected/d8f49f2b-6edc-40e6-b5cf-da3e8f26009f-kube-api-access-rthpw\") pod \"placement-operator-controller-manager-867d87977b-tpjpc\" (UID: \"d8f49f2b-6edc-40e6-b5cf-da3e8f26009f\") " pod="openstack-operators/placement-operator-controller-manager-867d87977b-tpjpc" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.436234 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-5b867" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.454211 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-867d87977b-tpjpc"] Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.483742 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-867d87977b-tpjpc" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.494071 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqnc6\" (UniqueName: \"kubernetes.io/projected/75a2f55f-977e-4608-86e3-ad7cbb948420-kube-api-access-gqnc6\") pod \"telemetry-operator-controller-manager-58487d9bf4-khr5x\" (UID: \"75a2f55f-977e-4608-86e3-ad7cbb948420\") " pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-khr5x" Dec 01 15:00:10 crc kubenswrapper[4637]: E1201 15:00:10.520596 4637 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 15:00:10 crc kubenswrapper[4637]: E1201 15:00:10.520678 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e99ec116-bc40-4275-b124-476b780bf9ca-cert podName:e99ec116-bc40-4275-b124-476b780bf9ca nodeName:}" failed. 
No retries permitted until 2025-12-01 15:00:11.520652248 +0000 UTC m=+862.038361076 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e99ec116-bc40-4275-b124-476b780bf9ca-cert") pod "openstack-baremetal-operator-controller-manager-77868f484-wbjjx" (UID: "e99ec116-bc40-4275-b124-476b780bf9ca") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.520446 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e99ec116-bc40-4275-b124-476b780bf9ca-cert\") pod \"openstack-baremetal-operator-controller-manager-77868f484-wbjjx\" (UID: \"e99ec116-bc40-4275-b124-476b780bf9ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-wbjjx" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.521083 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2vbn\" (UniqueName: \"kubernetes.io/projected/3d463954-f36f-4cc1-9303-df4f1e7b4c0c-kube-api-access-x2vbn\") pod \"test-operator-controller-manager-77db6bf9c-j4ktr\" (UID: \"3d463954-f36f-4cc1-9303-df4f1e7b4c0c\") " pod="openstack-operators/test-operator-controller-manager-77db6bf9c-j4ktr" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.579012 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58487d9bf4-khr5x"] Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.587462 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-77db6bf9c-j4ktr"] Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.589005 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6b56b8849f-tm79k"] Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.590458 4637 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-tm79k" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.616360 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-cnl9j" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.633238 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6b56b8849f-tm79k"] Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.640148 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2vbn\" (UniqueName: \"kubernetes.io/projected/3d463954-f36f-4cc1-9303-df4f1e7b4c0c-kube-api-access-x2vbn\") pod \"test-operator-controller-manager-77db6bf9c-j4ktr\" (UID: \"3d463954-f36f-4cc1-9303-df4f1e7b4c0c\") " pod="openstack-operators/test-operator-controller-manager-77db6bf9c-j4ktr" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.665431 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-qlzmc" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.723402 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2vbn\" (UniqueName: \"kubernetes.io/projected/3d463954-f36f-4cc1-9303-df4f1e7b4c0c-kube-api-access-x2vbn\") pod \"test-operator-controller-manager-77db6bf9c-j4ktr\" (UID: \"3d463954-f36f-4cc1-9303-df4f1e7b4c0c\") " pod="openstack-operators/test-operator-controller-manager-77db6bf9c-j4ktr" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.747214 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnhqr\" (UniqueName: \"kubernetes.io/projected/df856307-2e53-4198-b26b-f7cc780f6917-kube-api-access-hnhqr\") pod 
\"watcher-operator-controller-manager-6b56b8849f-tm79k\" (UID: \"df856307-2e53-4198-b26b-f7cc780f6917\") " pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-tm79k" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.750629 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6477f85467-czzlb"] Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.752116 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6477f85467-czzlb" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.755486 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.755779 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-5q4xq" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.773097 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6788cc6d75-ngs55"] Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.783004 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-j4ktr" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.794990 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-khr5x" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.842074 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6477f85467-czzlb"] Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.849440 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/32635512-8e34-46b3-8285-7cdc293b15e4-cert\") pod \"openstack-operator-controller-manager-6477f85467-czzlb\" (UID: \"32635512-8e34-46b3-8285-7cdc293b15e4\") " pod="openstack-operators/openstack-operator-controller-manager-6477f85467-czzlb" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.849552 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql9lv\" (UniqueName: \"kubernetes.io/projected/32635512-8e34-46b3-8285-7cdc293b15e4-kube-api-access-ql9lv\") pod \"openstack-operator-controller-manager-6477f85467-czzlb\" (UID: \"32635512-8e34-46b3-8285-7cdc293b15e4\") " pod="openstack-operators/openstack-operator-controller-manager-6477f85467-czzlb" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.849625 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnhqr\" (UniqueName: \"kubernetes.io/projected/df856307-2e53-4198-b26b-f7cc780f6917-kube-api-access-hnhqr\") pod \"watcher-operator-controller-manager-6b56b8849f-tm79k\" (UID: \"df856307-2e53-4198-b26b-f7cc780f6917\") " pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-tm79k" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.856552 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5bfbbb859d-xm2c9"] Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.871541 4637 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-hmlgw"] Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.872836 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-hmlgw" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.882488 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-84lfg" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.883610 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnhqr\" (UniqueName: \"kubernetes.io/projected/df856307-2e53-4198-b26b-f7cc780f6917-kube-api-access-hnhqr\") pod \"watcher-operator-controller-manager-6b56b8849f-tm79k\" (UID: \"df856307-2e53-4198-b26b-f7cc780f6917\") " pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-tm79k" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.913566 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-hmlgw"] Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.918489 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-tm79k" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.931418 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6bd966bbd4-4c7cm"] Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.956131 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-748967c98-7t5h2"] Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.956355 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbwmr\" (UniqueName: \"kubernetes.io/projected/6f0c83fd-5afa-48c8-aa05-ce507abc52c6-kube-api-access-mbwmr\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-hmlgw\" (UID: \"6f0c83fd-5afa-48c8-aa05-ce507abc52c6\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-hmlgw" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.957029 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/32635512-8e34-46b3-8285-7cdc293b15e4-cert\") pod \"openstack-operator-controller-manager-6477f85467-czzlb\" (UID: \"32635512-8e34-46b3-8285-7cdc293b15e4\") " pod="openstack-operators/openstack-operator-controller-manager-6477f85467-czzlb" Dec 01 15:00:10 crc kubenswrapper[4637]: I1201 15:00:10.957135 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql9lv\" (UniqueName: \"kubernetes.io/projected/32635512-8e34-46b3-8285-7cdc293b15e4-kube-api-access-ql9lv\") pod \"openstack-operator-controller-manager-6477f85467-czzlb\" (UID: \"32635512-8e34-46b3-8285-7cdc293b15e4\") " pod="openstack-operators/openstack-operator-controller-manager-6477f85467-czzlb" Dec 01 15:00:10 crc kubenswrapper[4637]: E1201 15:00:10.957378 4637 secret.go:188] Couldn't get secret 
openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 01 15:00:10 crc kubenswrapper[4637]: E1201 15:00:10.957485 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32635512-8e34-46b3-8285-7cdc293b15e4-cert podName:32635512-8e34-46b3-8285-7cdc293b15e4 nodeName:}" failed. No retries permitted until 2025-12-01 15:00:11.457458028 +0000 UTC m=+861.975166856 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/32635512-8e34-46b3-8285-7cdc293b15e4-cert") pod "openstack-operator-controller-manager-6477f85467-czzlb" (UID: "32635512-8e34-46b3-8285-7cdc293b15e4") : secret "webhook-server-cert" not found Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:10.997986 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql9lv\" (UniqueName: \"kubernetes.io/projected/32635512-8e34-46b3-8285-7cdc293b15e4-kube-api-access-ql9lv\") pod \"openstack-operator-controller-manager-6477f85467-czzlb\" (UID: \"32635512-8e34-46b3-8285-7cdc293b15e4\") " pod="openstack-operators/openstack-operator-controller-manager-6477f85467-czzlb" Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.058318 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbwmr\" (UniqueName: \"kubernetes.io/projected/6f0c83fd-5afa-48c8-aa05-ce507abc52c6-kube-api-access-mbwmr\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-hmlgw\" (UID: \"6f0c83fd-5afa-48c8-aa05-ce507abc52c6\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-hmlgw" Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.130320 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbwmr\" (UniqueName: \"kubernetes.io/projected/6f0c83fd-5afa-48c8-aa05-ce507abc52c6-kube-api-access-mbwmr\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-hmlgw\" (UID: 
\"6f0c83fd-5afa-48c8-aa05-ce507abc52c6\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-hmlgw" Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.256901 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-hmlgw" Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.263383 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9235c8f6-6738-496d-a945-42ba5d15afd2-cert\") pod \"infra-operator-controller-manager-577c5f6d94-kx7cf\" (UID: \"9235c8f6-6738-496d-a945-42ba5d15afd2\") " pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-kx7cf" Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.272566 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9235c8f6-6738-496d-a945-42ba5d15afd2-cert\") pod \"infra-operator-controller-manager-577c5f6d94-kx7cf\" (UID: \"9235c8f6-6738-496d-a945-42ba5d15afd2\") " pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-kx7cf" Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.344173 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-ngs55" event={"ID":"e552181e-b9e1-43f4-825f-649923e52631","Type":"ContainerStarted","Data":"da22225e38cc9ef193a502fe3ad30fd21fdb4a06a08803aa3cc9dd4fe86eaddf"} Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.358314 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-748967c98-7t5h2" event={"ID":"612d1951-263e-4d58-a3ab-94f8b2ddcb68","Type":"ContainerStarted","Data":"2f0fd1801dbe4f04220c5200f6aef7a1f2cdb906461c4a5c6635db2ed228e17a"} Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.358433 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-kx7cf" Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.364854 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-xm2c9" event={"ID":"e3fe5f37-3b9c-4d1d-9890-920cfaad9b36","Type":"ContainerStarted","Data":"24be605182c60b7e6e40c97f02887966cf29454ba381d3cb41e7dfff1355eecd"} Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.400251 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-4c7cm" event={"ID":"9a6330bc-2072-40b9-a81b-00d532b6b804","Type":"ContainerStarted","Data":"edee7245c3562ef091587f4dde59c606ef7c757a57c7a518539d93a40e366f60"} Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.448541 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-njp6w"] Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.478829 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/32635512-8e34-46b3-8285-7cdc293b15e4-cert\") pod \"openstack-operator-controller-manager-6477f85467-czzlb\" (UID: \"32635512-8e34-46b3-8285-7cdc293b15e4\") " pod="openstack-operators/openstack-operator-controller-manager-6477f85467-czzlb" Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.484899 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/32635512-8e34-46b3-8285-7cdc293b15e4-cert\") pod \"openstack-operator-controller-manager-6477f85467-czzlb\" (UID: \"32635512-8e34-46b3-8285-7cdc293b15e4\") " pod="openstack-operators/openstack-operator-controller-manager-6477f85467-czzlb" Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.498884 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/keystone-operator-controller-manager-7d6f5d799-p5htp"] Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.509629 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-64d7c556cd-ng9gk"] Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.519883 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-646fd589f9-4h7xc"] Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.584454 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e99ec116-bc40-4275-b124-476b780bf9ca-cert\") pod \"openstack-baremetal-operator-controller-manager-77868f484-wbjjx\" (UID: \"e99ec116-bc40-4275-b124-476b780bf9ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-wbjjx" Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.590229 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e99ec116-bc40-4275-b124-476b780bf9ca-cert\") pod \"openstack-baremetal-operator-controller-manager-77868f484-wbjjx\" (UID: \"e99ec116-bc40-4275-b124-476b780bf9ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-wbjjx" Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.650871 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7979c68bc7-69cgp"] Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.660297 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-698d6fd7d6-4qp7b"] Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.694558 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-wbjjx" Dec 01 15:00:11 crc kubenswrapper[4637]: W1201 15:00:11.695184 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1836e03a_1ea3_4a52_98e5_9e6f7e04d1b0.slice/crio-5fcd6490df45685b90bf545b3d9469bc00480f0872663c87c658378549106b0a WatchSource:0}: Error finding container 5fcd6490df45685b90bf545b3d9469bc00480f0872663c87c658378549106b0a: Status 404 returned error can't find the container with id 5fcd6490df45685b90bf545b3d9469bc00480f0872663c87c658378549106b0a Dec 01 15:00:11 crc kubenswrapper[4637]: W1201 15:00:11.700158 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee97ba6c_4f2f_4a8a_b631_ae8a77b4c35b.slice/crio-6f17ac83cc262097ee432a7d2c448b1c19fd76c1d125a96aea0770eebf72a21a WatchSource:0}: Error finding container 6f17ac83cc262097ee432a7d2c448b1c19fd76c1d125a96aea0770eebf72a21a: Status 404 returned error can't find the container with id 6f17ac83cc262097ee432a7d2c448b1c19fd76c1d125a96aea0770eebf72a21a Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.701408 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-54485f899-c8xnq"] Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.731673 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6477f85467-czzlb" Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.748235 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79d658b66d-j8flc"] Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.831784 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-867d87977b-tpjpc"] Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.831832 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-vhj5n"] Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.831846 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-cnl9j"] Dec 01 15:00:11 crc kubenswrapper[4637]: W1201 15:00:11.849580 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf0d761a_bcaa_4b9d_8e16_5c478c9a90d5.slice/crio-f6496b937609754055371ae288bb630b12f4ff9e1ece08e448d672ab7ea6c449 WatchSource:0}: Error finding container f6496b937609754055371ae288bb630b12f4ff9e1ece08e448d672ab7ea6c449: Status 404 returned error can't find the container with id f6496b937609754055371ae288bb630b12f4ff9e1ece08e448d672ab7ea6c449 Dec 01 15:00:11 crc kubenswrapper[4637]: W1201 15:00:11.853745 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8f49f2b_6edc_40e6_b5cf_da3e8f26009f.slice/crio-a63b21d22b691d2dfd31a90d2f5197556f0ebec1490a44af552ca9cd9c773b64 WatchSource:0}: Error finding container a63b21d22b691d2dfd31a90d2f5197556f0ebec1490a44af552ca9cd9c773b64: Status 404 returned error can't find the container with id a63b21d22b691d2dfd31a90d2f5197556f0ebec1490a44af552ca9cd9c773b64 Dec 01 15:00:11 crc 
kubenswrapper[4637]: I1201 15:00:11.875660 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-77db6bf9c-j4ktr"] Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.876394 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-cc9f5bc5c-lfzmh"] Dec 01 15:00:11 crc kubenswrapper[4637]: W1201 15:00:11.891549 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f425213_2aa4_419c_b672_22a94b28958a.slice/crio-0b81d16d0835519f5428c42484d637efc3aa4992615007e609df2b5b6a42a886 WatchSource:0}: Error finding container 0b81d16d0835519f5428c42484d637efc3aa4992615007e609df2b5b6a42a886: Status 404 returned error can't find the container with id 0b81d16d0835519f5428c42484d637efc3aa4992615007e609df2b5b6a42a886 Dec 01 15:00:11 crc kubenswrapper[4637]: E1201 15:00:11.909132 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:624b77b1b44f5e72a6c7d5910b04eb8070c499f83dcf364fb9dc5f2f8cb83c85,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x2vbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-77db6bf9c-j4ktr_openstack-operators(3d463954-f36f-4cc1-9303-df4f1e7b4c0c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 15:00:11 crc kubenswrapper[4637]: I1201 15:00:11.932484 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58487d9bf4-khr5x"] Dec 01 15:00:11 crc kubenswrapper[4637]: E1201 15:00:11.951003 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7477e2fea70c83cfca71e1ece83bc6fdab55e890db711b0110817a5afd97c591,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gqnc6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-58487d9bf4-khr5x_openstack-operators(75a2f55f-977e-4608-86e3-ad7cbb948420): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 15:00:12 crc kubenswrapper[4637]: I1201 15:00:12.033438 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-577c5f6d94-kx7cf"] Dec 01 15:00:12 crc kubenswrapper[4637]: I1201 15:00:12.045106 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6b56b8849f-tm79k"] Dec 01 15:00:12 crc kubenswrapper[4637]: I1201 15:00:12.064632 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-hmlgw"] Dec 01 15:00:12 crc kubenswrapper[4637]: E1201 15:00:12.100474 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:1988aaf9cd245150cda123aaaa21718ccb552c47f1623b7d68804f13c47f2c6a,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hnhqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
watcher-operator-controller-manager-6b56b8849f-tm79k_openstack-operators(df856307-2e53-4198-b26b-f7cc780f6917): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 15:00:12 crc kubenswrapper[4637]: E1201 15:00:12.123536 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mbwmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-hmlgw_openstack-operators(6f0c83fd-5afa-48c8-aa05-ce507abc52c6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 15:00:12 crc kubenswrapper[4637]: E1201 15:00:12.124690 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-hmlgw" podUID="6f0c83fd-5afa-48c8-aa05-ce507abc52c6" Dec 01 15:00:12 crc kubenswrapper[4637]: E1201 15:00:12.317766 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-j4ktr" podUID="3d463954-f36f-4cc1-9303-df4f1e7b4c0c" Dec 01 15:00:12 crc kubenswrapper[4637]: E1201 15:00:12.351874 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" 
pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-khr5x" podUID="75a2f55f-977e-4608-86e3-ad7cbb948420" Dec 01 15:00:12 crc kubenswrapper[4637]: I1201 15:00:12.417135 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-wbjjx"] Dec 01 15:00:12 crc kubenswrapper[4637]: I1201 15:00:12.441241 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-j8flc" event={"ID":"bf0d761a-bcaa-4b9d-8e16-5c478c9a90d5","Type":"ContainerStarted","Data":"f6496b937609754055371ae288bb630b12f4ff9e1ece08e448d672ab7ea6c449"} Dec 01 15:00:12 crc kubenswrapper[4637]: I1201 15:00:12.444140 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-kx7cf" event={"ID":"9235c8f6-6738-496d-a945-42ba5d15afd2","Type":"ContainerStarted","Data":"b54291b47bfdb62f95af4bc6e7fa5ad3dbda1352e77fa402ea39a42da59df9b5"} Dec 01 15:00:12 crc kubenswrapper[4637]: I1201 15:00:12.449590 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-ng9gk" event={"ID":"641c4df0-62e4-4b62-8f75-60e49bb56f7a","Type":"ContainerStarted","Data":"8d0bfd15d0fb665f4629d93c8be7b664ace28480bdb3b446c1700adddbe2dedf"} Dec 01 15:00:12 crc kubenswrapper[4637]: I1201 15:00:12.453068 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-p5htp" event={"ID":"68dbc1ea-c95b-48b1-a4a3-542c87f531ac","Type":"ContainerStarted","Data":"420357b57ec6de647a6407945a2c7981fa2784e17cdb7390afa4f14702966c6e"} Dec 01 15:00:12 crc kubenswrapper[4637]: I1201 15:00:12.454167 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-cnl9j" 
event={"ID":"a4dea3ee-ee87-4f8b-8b76-55db9bc85fc9","Type":"ContainerStarted","Data":"866b4c158e5d9350f544a74c4af4d9454d9772e5bb1f7648c2dcfe3da6549ec2"} Dec 01 15:00:12 crc kubenswrapper[4637]: I1201 15:00:12.455372 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-867d87977b-tpjpc" event={"ID":"d8f49f2b-6edc-40e6-b5cf-da3e8f26009f","Type":"ContainerStarted","Data":"a63b21d22b691d2dfd31a90d2f5197556f0ebec1490a44af552ca9cd9c773b64"} Dec 01 15:00:12 crc kubenswrapper[4637]: I1201 15:00:12.459599 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-hmlgw" event={"ID":"6f0c83fd-5afa-48c8-aa05-ce507abc52c6","Type":"ContainerStarted","Data":"bceef8eccfca71f8ac52309099afbe9da9b6803954f9a6e7b4f890e74fba3050"} Dec 01 15:00:12 crc kubenswrapper[4637]: E1201 15:00:12.463919 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-hmlgw" podUID="6f0c83fd-5afa-48c8-aa05-ce507abc52c6" Dec 01 15:00:12 crc kubenswrapper[4637]: I1201 15:00:12.465325 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-njp6w" event={"ID":"60365f73-6418-4fdc-901b-07a2321fdcf3","Type":"ContainerStarted","Data":"8f4baeb1489712fa2e0b077f40a855fc60193fb15c38969efe8a85983c4b9da2"} Dec 01 15:00:12 crc kubenswrapper[4637]: I1201 15:00:12.469123 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-54485f899-c8xnq" 
event={"ID":"ee97ba6c-4f2f-4a8a-b631-ae8a77b4c35b","Type":"ContainerStarted","Data":"6f17ac83cc262097ee432a7d2c448b1c19fd76c1d125a96aea0770eebf72a21a"} Dec 01 15:00:12 crc kubenswrapper[4637]: E1201 15:00:12.478526 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-tm79k" podUID="df856307-2e53-4198-b26b-f7cc780f6917" Dec 01 15:00:12 crc kubenswrapper[4637]: I1201 15:00:12.480014 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-lfzmh" event={"ID":"4f425213-2aa4-419c-b672-22a94b28958a","Type":"ContainerStarted","Data":"0b81d16d0835519f5428c42484d637efc3aa4992615007e609df2b5b6a42a886"} Dec 01 15:00:12 crc kubenswrapper[4637]: I1201 15:00:12.484073 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-vhj5n" event={"ID":"1f5d18af-662c-438a-ab53-62d6c6049921","Type":"ContainerStarted","Data":"77cb26031b37f0a2754eb478e463e1b12fb6f95ad01b2d785968ad47b6a6da9b"} Dec 01 15:00:12 crc kubenswrapper[4637]: I1201 15:00:12.492405 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-tm79k" event={"ID":"df856307-2e53-4198-b26b-f7cc780f6917","Type":"ContainerStarted","Data":"08bb1b3e513ac2c20a0500af576a53091b322ad0f14493608b15a259b7ae0e0c"} Dec 01 15:00:12 crc kubenswrapper[4637]: E1201 15:00:12.496136 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:1988aaf9cd245150cda123aaaa21718ccb552c47f1623b7d68804f13c47f2c6a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-tm79k" 
podUID="df856307-2e53-4198-b26b-f7cc780f6917" Dec 01 15:00:12 crc kubenswrapper[4637]: I1201 15:00:12.505786 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-69cgp" event={"ID":"1836e03a-1ea3-4a52-98e5-9e6f7e04d1b0","Type":"ContainerStarted","Data":"5fcd6490df45685b90bf545b3d9469bc00480f0872663c87c658378549106b0a"} Dec 01 15:00:12 crc kubenswrapper[4637]: I1201 15:00:12.520203 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-4h7xc" event={"ID":"0c561b38-c3aa-492a-bcec-9c471c3fbf0b","Type":"ContainerStarted","Data":"ab6dd0fced0ebd7f52bcac20c39714224f115c923bcba80bc6f7f6753b52bf07"} Dec 01 15:00:12 crc kubenswrapper[4637]: I1201 15:00:12.528140 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-wbjjx" event={"ID":"e99ec116-bc40-4275-b124-476b780bf9ca","Type":"ContainerStarted","Data":"fc6b2a9896d13c09e971bb5dc58227d8e407a7980f9a3872f7a4240ee98d949f"} Dec 01 15:00:12 crc kubenswrapper[4637]: I1201 15:00:12.533379 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-khr5x" event={"ID":"75a2f55f-977e-4608-86e3-ad7cbb948420","Type":"ContainerStarted","Data":"341cc04769c8da628a43fca422625c967f033766c097b0fca9c68b9e6fabe526"} Dec 01 15:00:12 crc kubenswrapper[4637]: I1201 15:00:12.533410 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-khr5x" event={"ID":"75a2f55f-977e-4608-86e3-ad7cbb948420","Type":"ContainerStarted","Data":"aa3d203f1cc9c6b1a8bb146b0f67f81d2d5863d57fe4198b6add06d44cc9498e"} Dec 01 15:00:12 crc kubenswrapper[4637]: E1201 15:00:12.535151 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7477e2fea70c83cfca71e1ece83bc6fdab55e890db711b0110817a5afd97c591\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-khr5x" podUID="75a2f55f-977e-4608-86e3-ad7cbb948420" Dec 01 15:00:12 crc kubenswrapper[4637]: I1201 15:00:12.538435 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-4qp7b" event={"ID":"df23c8f8-8046-4e98-a46b-cc7c829981b9","Type":"ContainerStarted","Data":"32e8835f7ff60e48f68490a290675c0ad9403a3be6a207c3239ad12742d5355b"} Dec 01 15:00:12 crc kubenswrapper[4637]: I1201 15:00:12.550525 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6477f85467-czzlb"] Dec 01 15:00:12 crc kubenswrapper[4637]: I1201 15:00:12.553248 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-j4ktr" event={"ID":"3d463954-f36f-4cc1-9303-df4f1e7b4c0c","Type":"ContainerStarted","Data":"f1f1f712dffa33c83d1a4d5b7452810d4f46c990337013000640bdb636975849"} Dec 01 15:00:12 crc kubenswrapper[4637]: I1201 15:00:12.553302 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-j4ktr" event={"ID":"3d463954-f36f-4cc1-9303-df4f1e7b4c0c","Type":"ContainerStarted","Data":"f109dece2d81655a6750a7fe2947826d7fabaa6cf8fd0ec4c6497ae812f43151"} Dec 01 15:00:12 crc kubenswrapper[4637]: E1201 15:00:12.559571 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:624b77b1b44f5e72a6c7d5910b04eb8070c499f83dcf364fb9dc5f2f8cb83c85\\\"\"" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-j4ktr" podUID="3d463954-f36f-4cc1-9303-df4f1e7b4c0c" Dec 01 
15:00:13 crc kubenswrapper[4637]: I1201 15:00:13.589017 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-tm79k" event={"ID":"df856307-2e53-4198-b26b-f7cc780f6917","Type":"ContainerStarted","Data":"925c276067fdc1826ab1bbf63c8ea0c51a19fee05f4e9ab249dc6cf024d854c3"} Dec 01 15:00:13 crc kubenswrapper[4637]: E1201 15:00:13.593366 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:1988aaf9cd245150cda123aaaa21718ccb552c47f1623b7d68804f13c47f2c6a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-tm79k" podUID="df856307-2e53-4198-b26b-f7cc780f6917" Dec 01 15:00:13 crc kubenswrapper[4637]: I1201 15:00:13.596152 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6477f85467-czzlb" event={"ID":"32635512-8e34-46b3-8285-7cdc293b15e4","Type":"ContainerStarted","Data":"08e3247091bcf20afa26fd6058b7abf8a98c33d436ced663b71941dd78036e59"} Dec 01 15:00:13 crc kubenswrapper[4637]: I1201 15:00:13.596245 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6477f85467-czzlb" event={"ID":"32635512-8e34-46b3-8285-7cdc293b15e4","Type":"ContainerStarted","Data":"e7e4a5a0af5b2806bcaf7120a87d2393fbe8dce7cda9aca28c7f4df84c3bdec3"} Dec 01 15:00:13 crc kubenswrapper[4637]: I1201 15:00:13.596269 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6477f85467-czzlb" event={"ID":"32635512-8e34-46b3-8285-7cdc293b15e4","Type":"ContainerStarted","Data":"b1f62330c61df5492167577e93591a0dc7d6c6fe1e2610e3419a1235a98924fe"} Dec 01 15:00:13 crc kubenswrapper[4637]: I1201 15:00:13.597337 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/openstack-operator-controller-manager-6477f85467-czzlb" Dec 01 15:00:13 crc kubenswrapper[4637]: E1201 15:00:13.611724 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:624b77b1b44f5e72a6c7d5910b04eb8070c499f83dcf364fb9dc5f2f8cb83c85\\\"\"" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-j4ktr" podUID="3d463954-f36f-4cc1-9303-df4f1e7b4c0c" Dec 01 15:00:13 crc kubenswrapper[4637]: E1201 15:00:13.611851 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7477e2fea70c83cfca71e1ece83bc6fdab55e890db711b0110817a5afd97c591\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-khr5x" podUID="75a2f55f-977e-4608-86e3-ad7cbb948420" Dec 01 15:00:13 crc kubenswrapper[4637]: E1201 15:00:13.629585 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-hmlgw" podUID="6f0c83fd-5afa-48c8-aa05-ce507abc52c6" Dec 01 15:00:13 crc kubenswrapper[4637]: I1201 15:00:13.694459 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6477f85467-czzlb" podStartSLOduration=3.694441063 podStartE2EDuration="3.694441063s" podCreationTimestamp="2025-12-01 15:00:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 
15:00:13.690537728 +0000 UTC m=+864.208246556" watchObservedRunningTime="2025-12-01 15:00:13.694441063 +0000 UTC m=+864.212149891" Dec 01 15:00:14 crc kubenswrapper[4637]: E1201 15:00:14.646215 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:1988aaf9cd245150cda123aaaa21718ccb552c47f1623b7d68804f13c47f2c6a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-tm79k" podUID="df856307-2e53-4198-b26b-f7cc780f6917" Dec 01 15:00:21 crc kubenswrapper[4637]: I1201 15:00:21.743198 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6477f85467-czzlb" Dec 01 15:00:24 crc kubenswrapper[4637]: E1201 15:00:24.895613 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:6f630b256a17a0d40ec49bbf3bfbc65118e712cafea97fb0eee03dbc037d6bf8" Dec 01 15:00:24 crc kubenswrapper[4637]: E1201 15:00:24.899091 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:6f630b256a17a0d40ec49bbf3bfbc65118e712cafea97fb0eee03dbc037d6bf8,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z9zn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-577c5f6d94-kx7cf_openstack-operators(9235c8f6-6738-496d-a945-42ba5d15afd2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 15:00:25 crc kubenswrapper[4637]: E1201 15:00:25.359968 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying 
config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2c4fe20e044dd8ea1f60f2f3f5e3844d932b4b79439835bd8771c73f16b38312" Dec 01 15:00:25 crc kubenswrapper[4637]: E1201 15:00:25.360519 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2c4fe20e044dd8ea1f60f2f3f5e3844d932b4b79439835bd8771c73f16b38312,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2h95d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-64d7c556cd-ng9gk_openstack-operators(641c4df0-62e4-4b62-8f75-60e49bb56f7a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 15:00:25 crc kubenswrapper[4637]: E1201 15:00:25.798780 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:f657fa5fddbe0d7cdf889002981a743e421cfbcfb396ec38013aa511596f45ef" Dec 01 15:00:25 crc kubenswrapper[4637]: E1201 15:00:25.799067 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:f657fa5fddbe0d7cdf889002981a743e421cfbcfb396ec38013aa511596f45ef,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mqmrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
octavia-operator-controller-manager-7979c68bc7-69cgp_openstack-operators(1836e03a-1ea3-4a52-98e5-9e6f7e04d1b0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 15:00:27 crc kubenswrapper[4637]: E1201 15:00:27.288992 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:bc58f62c7171e9c9216fdeafbd170917b638e6c3f842031ee254f1389c57a09e" Dec 01 15:00:27 crc kubenswrapper[4637]: E1201 15:00:27.289292 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:bc58f62c7171e9c9216fdeafbd170917b638e6c3f842031ee254f1389c57a09e,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qt96q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-cc9f5bc5c-lfzmh_openstack-operators(4f425213-2aa4-419c-b672-22a94b28958a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 15:00:27 crc kubenswrapper[4637]: E1201 15:00:27.720555 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:9aee58b2ca71ef9c4f12373090951090d13aa7038d0fef07ec30167f3d6ae23c" Dec 01 15:00:27 crc kubenswrapper[4637]: E1201 15:00:27.720751 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:9aee58b2ca71ef9c4f12373090951090d13aa7038d0fef07ec30167f3d6ae23c,Command:[/manager],Args:[--health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-26sn5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ironic-operator-controller-manager-54485f899-c8xnq_openstack-operators(ee97ba6c-4f2f-4a8a-b631-ae8a77b4c35b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 15:00:29 crc kubenswrapper[4637]: E1201 15:00:29.590852 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:44c6dcec0d489a675c35e097d92729162bfc2a8cac62d7c8376943ef922e2651" Dec 01 15:00:29 crc kubenswrapper[4637]: E1201 15:00:29.591457 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:44c6dcec0d489a675c35e097d92729162bfc2a8cac62d7c8376943ef922e2651,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v4d95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-748967c98-7t5h2_openstack-operators(612d1951-263e-4d58-a3ab-94f8b2ddcb68): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 15:00:30 crc kubenswrapper[4637]: E1201 15:00:30.133523 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:c5394efcfaeddc4231f98f1ed5267b77a8687038064cfb4302bcd0c8d6587856" Dec 01 15:00:30 crc kubenswrapper[4637]: E1201 15:00:30.134011 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:c5394efcfaeddc4231f98f1ed5267b77a8687038064cfb4302bcd0c8d6587856,Command:[/manager],Args:[--health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9jsvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
barbican-operator-controller-manager-5bfbbb859d-xm2c9_openstack-operators(e3fe5f37-3b9c-4d1d-9890-920cfaad9b36): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 15:00:30 crc kubenswrapper[4637]: E1201 15:00:30.806794 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:66928f0eae5206f671ac7b21f79953e37009c54187d768dc6e03fe0a3d202b3b" Dec 01 15:00:30 crc kubenswrapper[4637]: E1201 15:00:30.807328 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:66928f0eae5206f671ac7b21f79953e37009c54187d768dc6e03fe0a3d202b3b,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IM
AGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMA
GE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFr
om:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueF
rom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IM
AGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{
Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,}
,EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2jshm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-77868f484-wbjjx_openstack-operators(e99ec116-bc40-4275-b124-476b780bf9ca): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 15:00:33 crc kubenswrapper[4637]: E1201 15:00:33.400765 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:fd917de0cf800ec284ee0c3f2906a06d85ea18cb75a5b06c8eb305750467986d" Dec 01 15:00:33 crc kubenswrapper[4637]: E1201 15:00:33.402604 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:fd917de0cf800ec284ee0c3f2906a06d85ea18cb75a5b06c8eb305750467986d,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rthpw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-867d87977b-tpjpc_openstack-operators(d8f49f2b-6edc-40e6-b5cf-da3e8f26009f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 15:00:35 crc kubenswrapper[4637]: E1201 15:00:35.300782 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:2c837009de6475bc22534827c03df6d8649277b71f1c30de2087b6c52aafb326" Dec 01 15:00:35 crc kubenswrapper[4637]: E1201 15:00:35.301324 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:2c837009de6475bc22534827c03df6d8649277b71f1c30de2087b6c52aafb326,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gj8x8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovn-operator-controller-manager-5b67cfc8fb-cnl9j_openstack-operators(a4dea3ee-ee87-4f8b-8b76-55db9bc85fc9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 15:00:35 crc kubenswrapper[4637]: E1201 15:00:35.839630 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:76ad3ddd8c89748b1d9a5f3a0b2f0f47494cdb62e2997610de7febcb12970635" Dec 01 15:00:35 crc kubenswrapper[4637]: E1201 15:00:35.839846 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:76ad3ddd8c89748b1d9a5f3a0b2f0f47494cdb62e2997610de7febcb12970635,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qt8q6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-6b6c55ffd5-vhj5n_openstack-operators(1f5d18af-662c-438a-ab53-62d6c6049921): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 15:00:42 crc kubenswrapper[4637]: E1201 15:00:42.212288 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-kx7cf" podUID="9235c8f6-6738-496d-a945-42ba5d15afd2" Dec 01 15:00:42 crc kubenswrapper[4637]: E1201 15:00:42.229957 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-69cgp" podUID="1836e03a-1ea3-4a52-98e5-9e6f7e04d1b0" Dec 01 15:00:42 crc kubenswrapper[4637]: E1201 15:00:42.242611 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-lfzmh" podUID="4f425213-2aa4-419c-b672-22a94b28958a" Dec 01 15:00:42 crc kubenswrapper[4637]: E1201 15:00:42.326042 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-ng9gk" podUID="641c4df0-62e4-4b62-8f75-60e49bb56f7a" Dec 01 15:00:42 crc kubenswrapper[4637]: E1201 15:00:42.352903 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-867d87977b-tpjpc" podUID="d8f49f2b-6edc-40e6-b5cf-da3e8f26009f" Dec 01 15:00:42 crc kubenswrapper[4637]: E1201 15:00:42.367482 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-cnl9j" podUID="a4dea3ee-ee87-4f8b-8b76-55db9bc85fc9" Dec 01 15:00:42 crc kubenswrapper[4637]: E1201 15:00:42.378665 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-54485f899-c8xnq" 
podUID="ee97ba6c-4f2f-4a8a-b631-ae8a77b4c35b" Dec 01 15:00:42 crc kubenswrapper[4637]: E1201 15:00:42.745231 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-wbjjx" podUID="e99ec116-bc40-4275-b124-476b780bf9ca" Dec 01 15:00:42 crc kubenswrapper[4637]: E1201 15:00:42.806137 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-xm2c9" podUID="e3fe5f37-3b9c-4d1d-9890-920cfaad9b36" Dec 01 15:00:42 crc kubenswrapper[4637]: E1201 15:00:42.814145 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-748967c98-7t5h2" podUID="612d1951-263e-4d58-a3ab-94f8b2ddcb68" Dec 01 15:00:42 crc kubenswrapper[4637]: E1201 15:00:42.828043 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-vhj5n" podUID="1f5d18af-662c-438a-ab53-62d6c6049921" Dec 01 15:00:42 crc kubenswrapper[4637]: I1201 15:00:42.917909 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-p5htp" event={"ID":"68dbc1ea-c95b-48b1-a4a3-542c87f531ac","Type":"ContainerStarted","Data":"01d6622e045698fa653cf877fe1eb911d2bff619b9f4b35872722bb7a8537ba6"} Dec 01 15:00:42 crc kubenswrapper[4637]: I1201 
15:00:42.954845 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-vhj5n" event={"ID":"1f5d18af-662c-438a-ab53-62d6c6049921","Type":"ContainerStarted","Data":"3ba646b6e161b6e4a71f2fe98dac10542f7293b0e3cb3a2b017e526d1dfb54cb"} Dec 01 15:00:42 crc kubenswrapper[4637]: E1201 15:00:42.981247 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:76ad3ddd8c89748b1d9a5f3a0b2f0f47494cdb62e2997610de7febcb12970635\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-vhj5n" podUID="1f5d18af-662c-438a-ab53-62d6c6049921" Dec 01 15:00:42 crc kubenswrapper[4637]: I1201 15:00:42.982470 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-867d87977b-tpjpc" event={"ID":"d8f49f2b-6edc-40e6-b5cf-da3e8f26009f","Type":"ContainerStarted","Data":"bd3e7c9d35eed230256e19fdc95c8024abc3293812a03ddedfe7f969863c772f"} Dec 01 15:00:43 crc kubenswrapper[4637]: I1201 15:00:43.043768 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-xm2c9" event={"ID":"e3fe5f37-3b9c-4d1d-9890-920cfaad9b36","Type":"ContainerStarted","Data":"e82ba99fefd560526af4da92f76cf9eaec67c82c6bf4a2f5965ec45a28d9ff4e"} Dec 01 15:00:43 crc kubenswrapper[4637]: E1201 15:00:43.044842 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:fd917de0cf800ec284ee0c3f2906a06d85ea18cb75a5b06c8eb305750467986d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-867d87977b-tpjpc" podUID="d8f49f2b-6edc-40e6-b5cf-da3e8f26009f" Dec 01 15:00:43 crc kubenswrapper[4637]: 
I1201 15:00:43.063440 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-khr5x" event={"ID":"75a2f55f-977e-4608-86e3-ad7cbb948420","Type":"ContainerStarted","Data":"d2afef80e1fd6f22192bf7bbfce740649a65f50bc61cfa52e8c226d2f59f06e3"} Dec 01 15:00:43 crc kubenswrapper[4637]: I1201 15:00:43.064615 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-khr5x" Dec 01 15:00:43 crc kubenswrapper[4637]: I1201 15:00:43.079829 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-ng9gk" event={"ID":"641c4df0-62e4-4b62-8f75-60e49bb56f7a","Type":"ContainerStarted","Data":"e3895b8025b7ec9e0a4636a2d98c0a61986120d79bcbd0f1a529c8779712a2cd"} Dec 01 15:00:43 crc kubenswrapper[4637]: I1201 15:00:43.131344 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-4c7cm" event={"ID":"9a6330bc-2072-40b9-a81b-00d532b6b804","Type":"ContainerStarted","Data":"0de667cb1792000f0365774997fae2115f7d9f55504f55fb3685f25c337bbe88"} Dec 01 15:00:43 crc kubenswrapper[4637]: I1201 15:00:43.136837 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-j4ktr" event={"ID":"3d463954-f36f-4cc1-9303-df4f1e7b4c0c","Type":"ContainerStarted","Data":"50cffab3fed3557e07e1a1537127475e4b1b77de22489f0c20b3beacc1723855"} Dec 01 15:00:43 crc kubenswrapper[4637]: I1201 15:00:43.138396 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-j4ktr" Dec 01 15:00:43 crc kubenswrapper[4637]: I1201 15:00:43.172658 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-748967c98-7t5h2" 
event={"ID":"612d1951-263e-4d58-a3ab-94f8b2ddcb68","Type":"ContainerStarted","Data":"c6d3b6d509005b634b722d0edb06e8cc4fc80465b60892a849526d119d7a09bf"} Dec 01 15:00:43 crc kubenswrapper[4637]: I1201 15:00:43.193268 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-hmlgw" event={"ID":"6f0c83fd-5afa-48c8-aa05-ce507abc52c6","Type":"ContainerStarted","Data":"d3d7d5781ba9a4b2e126fc32340a3d567b2315883e49b1482a0b5860ba92442c"} Dec 01 15:00:43 crc kubenswrapper[4637]: I1201 15:00:43.220176 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-kx7cf" event={"ID":"9235c8f6-6738-496d-a945-42ba5d15afd2","Type":"ContainerStarted","Data":"319652e0a90b09ecad8d355ee40f2968eeaa6d9eae0425b222eddb38e2f8676a"} Dec 01 15:00:43 crc kubenswrapper[4637]: I1201 15:00:43.238125 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-4qp7b" event={"ID":"df23c8f8-8046-4e98-a46b-cc7c829981b9","Type":"ContainerStarted","Data":"e379e11667a9f2e2d0d7e7de7bc87122d615ed17ad1cac5d26bb276f2ca22b74"} Dec 01 15:00:43 crc kubenswrapper[4637]: I1201 15:00:43.244848 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-4h7xc" event={"ID":"0c561b38-c3aa-492a-bcec-9c471c3fbf0b","Type":"ContainerStarted","Data":"9071735e2c345578913d02ca653333bfd4a69541da0fdf47be1650c0c702b904"} Dec 01 15:00:43 crc kubenswrapper[4637]: I1201 15:00:43.261258 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-69cgp" event={"ID":"1836e03a-1ea3-4a52-98e5-9e6f7e04d1b0","Type":"ContainerStarted","Data":"2fce34047db230e1595b6706cd8e89d3de0937cf53974fb8d00fde88521b88cb"} Dec 01 15:00:43 crc kubenswrapper[4637]: I1201 15:00:43.279758 4637 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-lfzmh" event={"ID":"4f425213-2aa4-419c-b672-22a94b28958a","Type":"ContainerStarted","Data":"397a5c2b79acfbffa4e3b44f7f7fa327303a2b07f214e30339d49e0f1c304150"} Dec 01 15:00:43 crc kubenswrapper[4637]: I1201 15:00:43.306251 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-54485f899-c8xnq" event={"ID":"ee97ba6c-4f2f-4a8a-b631-ae8a77b4c35b","Type":"ContainerStarted","Data":"b12021c6de06cfbdac37582d9b7e786618bed5537a2080f57ddc150bed30d23b"} Dec 01 15:00:43 crc kubenswrapper[4637]: I1201 15:00:43.310940 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-khr5x" podStartSLOduration=4.338515664 podStartE2EDuration="34.310894235s" podCreationTimestamp="2025-12-01 15:00:09 +0000 UTC" firstStartedPulling="2025-12-01 15:00:11.950720756 +0000 UTC m=+862.468429584" lastFinishedPulling="2025-12-01 15:00:41.923099327 +0000 UTC m=+892.440808155" observedRunningTime="2025-12-01 15:00:43.30626381 +0000 UTC m=+893.823972638" watchObservedRunningTime="2025-12-01 15:00:43.310894235 +0000 UTC m=+893.828603063" Dec 01 15:00:43 crc kubenswrapper[4637]: I1201 15:00:43.337147 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-ngs55" event={"ID":"e552181e-b9e1-43f4-825f-649923e52631","Type":"ContainerStarted","Data":"9f6fabe907d26c756c1f06a25249753a9306224d1cafd1518d6437895c1e7686"} Dec 01 15:00:43 crc kubenswrapper[4637]: I1201 15:00:43.356526 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-cnl9j" event={"ID":"a4dea3ee-ee87-4f8b-8b76-55db9bc85fc9","Type":"ContainerStarted","Data":"9ceecd558a7e1c8d8a4e9f2142f8fe68aa4809582e2dddd430945d88e4b1e8b4"} Dec 01 15:00:43 crc kubenswrapper[4637]: E1201 
15:00:43.359993 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:2c837009de6475bc22534827c03df6d8649277b71f1c30de2087b6c52aafb326\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-cnl9j" podUID="a4dea3ee-ee87-4f8b-8b76-55db9bc85fc9" Dec 01 15:00:43 crc kubenswrapper[4637]: I1201 15:00:43.362907 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-wbjjx" event={"ID":"e99ec116-bc40-4275-b124-476b780bf9ca","Type":"ContainerStarted","Data":"a80961950bb52960c37abf5c98ad7e5e4060b34bec23ee29952d4818e70ba1ab"} Dec 01 15:00:43 crc kubenswrapper[4637]: I1201 15:00:43.366135 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-njp6w" event={"ID":"60365f73-6418-4fdc-901b-07a2321fdcf3","Type":"ContainerStarted","Data":"b0df4a65a9070f9c9b20997507a0d1720699a43d402f483f9233dd0d7ffe9288"} Dec 01 15:00:43 crc kubenswrapper[4637]: I1201 15:00:43.386967 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-tm79k" event={"ID":"df856307-2e53-4198-b26b-f7cc780f6917","Type":"ContainerStarted","Data":"32498ad67a74b769ca94143ff201b4f9d5775fdd48d3b46f00e98cc064a63465"} Dec 01 15:00:43 crc kubenswrapper[4637]: I1201 15:00:43.387506 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-tm79k" Dec 01 15:00:43 crc kubenswrapper[4637]: I1201 15:00:43.405525 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-j8flc" 
event={"ID":"bf0d761a-bcaa-4b9d-8e16-5c478c9a90d5","Type":"ContainerStarted","Data":"d5a1070e5a350f37fff40b501ff29faa12d1156d2982280bb09b4b69f9cb45d3"} Dec 01 15:00:43 crc kubenswrapper[4637]: I1201 15:00:43.647799 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-j4ktr" podStartSLOduration=4.794579234 podStartE2EDuration="34.64776846s" podCreationTimestamp="2025-12-01 15:00:09 +0000 UTC" firstStartedPulling="2025-12-01 15:00:11.908920069 +0000 UTC m=+862.426628887" lastFinishedPulling="2025-12-01 15:00:41.762109255 +0000 UTC m=+892.279818113" observedRunningTime="2025-12-01 15:00:43.567637639 +0000 UTC m=+894.085346467" watchObservedRunningTime="2025-12-01 15:00:43.64776846 +0000 UTC m=+894.165477298" Dec 01 15:00:43 crc kubenswrapper[4637]: I1201 15:00:43.848153 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-hmlgw" podStartSLOduration=3.951342691 podStartE2EDuration="33.848135574s" podCreationTimestamp="2025-12-01 15:00:10 +0000 UTC" firstStartedPulling="2025-12-01 15:00:12.123393103 +0000 UTC m=+862.641101931" lastFinishedPulling="2025-12-01 15:00:42.020185976 +0000 UTC m=+892.537894814" observedRunningTime="2025-12-01 15:00:43.842996676 +0000 UTC m=+894.360705494" watchObservedRunningTime="2025-12-01 15:00:43.848135574 +0000 UTC m=+894.365844402" Dec 01 15:00:44 crc kubenswrapper[4637]: I1201 15:00:44.147104 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-tm79k" podStartSLOduration=5.466615278 podStartE2EDuration="35.147071336s" podCreationTimestamp="2025-12-01 15:00:09 +0000 UTC" firstStartedPulling="2025-12-01 15:00:12.100331481 +0000 UTC m=+862.618040309" lastFinishedPulling="2025-12-01 15:00:41.780787539 +0000 UTC m=+892.298496367" observedRunningTime="2025-12-01 
15:00:44.043514054 +0000 UTC m=+894.561222882" watchObservedRunningTime="2025-12-01 15:00:44.147071336 +0000 UTC m=+894.664780164" Dec 01 15:00:44 crc kubenswrapper[4637]: I1201 15:00:44.419528 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-p5htp" event={"ID":"68dbc1ea-c95b-48b1-a4a3-542c87f531ac","Type":"ContainerStarted","Data":"d102a52619add4d5bee0d1991c650a2a46e468a908b939544a36b7bada6dcccd"} Dec 01 15:00:44 crc kubenswrapper[4637]: I1201 15:00:44.421027 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-p5htp" Dec 01 15:00:44 crc kubenswrapper[4637]: I1201 15:00:44.427646 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-4h7xc" event={"ID":"0c561b38-c3aa-492a-bcec-9c471c3fbf0b","Type":"ContainerStarted","Data":"5498fa97617dee97e5c198282a498594e5bc53b68b8b6be32d9e5cb5d3f6d062"} Dec 01 15:00:44 crc kubenswrapper[4637]: I1201 15:00:44.428310 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-4h7xc" Dec 01 15:00:44 crc kubenswrapper[4637]: I1201 15:00:44.431429 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-xm2c9" event={"ID":"e3fe5f37-3b9c-4d1d-9890-920cfaad9b36","Type":"ContainerStarted","Data":"33bd33fc32cbcca01d2f3b4c2e81d859fcc61a3d7e4d3c0853210569679d73f0"} Dec 01 15:00:44 crc kubenswrapper[4637]: I1201 15:00:44.437026 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-xm2c9" Dec 01 15:00:44 crc kubenswrapper[4637]: I1201 15:00:44.513772 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-njp6w" event={"ID":"60365f73-6418-4fdc-901b-07a2321fdcf3","Type":"ContainerStarted","Data":"d31c5adaf337855d531257fcb1176bc13a0ddb56642c0b6bd3606c545dc373b9"} Dec 01 15:00:44 crc kubenswrapper[4637]: I1201 15:00:44.514268 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-njp6w" Dec 01 15:00:44 crc kubenswrapper[4637]: I1201 15:00:44.566052 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-p5htp" podStartSLOduration=5.760204135 podStartE2EDuration="35.566027805s" podCreationTimestamp="2025-12-01 15:00:09 +0000 UTC" firstStartedPulling="2025-12-01 15:00:11.490892754 +0000 UTC m=+862.008601582" lastFinishedPulling="2025-12-01 15:00:41.296716414 +0000 UTC m=+891.814425252" observedRunningTime="2025-12-01 15:00:44.476781738 +0000 UTC m=+894.994490566" watchObservedRunningTime="2025-12-01 15:00:44.566027805 +0000 UTC m=+895.083736643" Dec 01 15:00:44 crc kubenswrapper[4637]: I1201 15:00:44.566674 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-4h7xc" podStartSLOduration=5.783268717 podStartE2EDuration="35.566667832s" podCreationTimestamp="2025-12-01 15:00:09 +0000 UTC" firstStartedPulling="2025-12-01 15:00:11.511825959 +0000 UTC m=+862.029534787" lastFinishedPulling="2025-12-01 15:00:41.295225084 +0000 UTC m=+891.812933902" observedRunningTime="2025-12-01 15:00:44.544524296 +0000 UTC m=+895.062233134" watchObservedRunningTime="2025-12-01 15:00:44.566667832 +0000 UTC m=+895.084376670" Dec 01 15:00:44 crc kubenswrapper[4637]: I1201 15:00:44.577750 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-j8flc" 
event={"ID":"bf0d761a-bcaa-4b9d-8e16-5c478c9a90d5","Type":"ContainerStarted","Data":"05ee698ceda57010e00d42cd9b7b660664afe97d0a51b8ec73f71a205118de61"} Dec 01 15:00:44 crc kubenswrapper[4637]: I1201 15:00:44.578712 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-j8flc" Dec 01 15:00:44 crc kubenswrapper[4637]: I1201 15:00:44.603917 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-4qp7b" event={"ID":"df23c8f8-8046-4e98-a46b-cc7c829981b9","Type":"ContainerStarted","Data":"3a1d670843288e02bdbf895053608e57e334315eab2c4cb512257dd675458169"} Dec 01 15:00:44 crc kubenswrapper[4637]: I1201 15:00:44.604739 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-4qp7b" Dec 01 15:00:44 crc kubenswrapper[4637]: I1201 15:00:44.605762 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-xm2c9" podStartSLOduration=3.600069909 podStartE2EDuration="36.605751846s" podCreationTimestamp="2025-12-01 15:00:08 +0000 UTC" firstStartedPulling="2025-12-01 15:00:10.629221006 +0000 UTC m=+861.146929834" lastFinishedPulling="2025-12-01 15:00:43.634902953 +0000 UTC m=+894.152611771" observedRunningTime="2025-12-01 15:00:44.605104039 +0000 UTC m=+895.122812867" watchObservedRunningTime="2025-12-01 15:00:44.605751846 +0000 UTC m=+895.123460674" Dec 01 15:00:44 crc kubenswrapper[4637]: I1201 15:00:44.619195 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-4c7cm" event={"ID":"9a6330bc-2072-40b9-a81b-00d532b6b804","Type":"ContainerStarted","Data":"b522493bf2e4540d0a286f5ae11bf73bf7646aeec9ad840a7a5bd0e4270a9dcc"} Dec 01 15:00:44 crc kubenswrapper[4637]: I1201 15:00:44.620066 4637 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-4c7cm" Dec 01 15:00:44 crc kubenswrapper[4637]: I1201 15:00:44.662968 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-njp6w" podStartSLOduration=6.857378696 podStartE2EDuration="36.662945799s" podCreationTimestamp="2025-12-01 15:00:08 +0000 UTC" firstStartedPulling="2025-12-01 15:00:11.490877294 +0000 UTC m=+862.008586122" lastFinishedPulling="2025-12-01 15:00:41.296444397 +0000 UTC m=+891.814153225" observedRunningTime="2025-12-01 15:00:44.659616189 +0000 UTC m=+895.177325017" watchObservedRunningTime="2025-12-01 15:00:44.662945799 +0000 UTC m=+895.180654637" Dec 01 15:00:44 crc kubenswrapper[4637]: I1201 15:00:44.680225 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-ngs55" event={"ID":"e552181e-b9e1-43f4-825f-649923e52631","Type":"ContainerStarted","Data":"bb57f6435aa4e9c18b055804cb3f9f0c5d8bd1cb172464247ffb0ef5ca37ad5a"} Dec 01 15:00:44 crc kubenswrapper[4637]: I1201 15:00:44.680292 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-ngs55" Dec 01 15:00:44 crc kubenswrapper[4637]: I1201 15:00:44.712466 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-j8flc" podStartSLOduration=6.302644787 podStartE2EDuration="35.712419793s" podCreationTimestamp="2025-12-01 15:00:09 +0000 UTC" firstStartedPulling="2025-12-01 15:00:11.880529614 +0000 UTC m=+862.398238442" lastFinishedPulling="2025-12-01 15:00:41.29030457 +0000 UTC m=+891.808013448" observedRunningTime="2025-12-01 15:00:44.70266655 +0000 UTC m=+895.220375368" watchObservedRunningTime="2025-12-01 15:00:44.712419793 +0000 UTC 
m=+895.230128621" Dec 01 15:00:44 crc kubenswrapper[4637]: E1201 15:00:44.737651 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:76ad3ddd8c89748b1d9a5f3a0b2f0f47494cdb62e2997610de7febcb12970635\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-vhj5n" podUID="1f5d18af-662c-438a-ab53-62d6c6049921" Dec 01 15:00:44 crc kubenswrapper[4637]: E1201 15:00:44.738235 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:2c837009de6475bc22534827c03df6d8649277b71f1c30de2087b6c52aafb326\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-cnl9j" podUID="a4dea3ee-ee87-4f8b-8b76-55db9bc85fc9" Dec 01 15:00:44 crc kubenswrapper[4637]: I1201 15:00:44.796191 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-4qp7b" podStartSLOduration=7.185342173 podStartE2EDuration="36.796161762s" podCreationTimestamp="2025-12-01 15:00:08 +0000 UTC" firstStartedPulling="2025-12-01 15:00:11.681721072 +0000 UTC m=+862.199429900" lastFinishedPulling="2025-12-01 15:00:41.292540601 +0000 UTC m=+891.810249489" observedRunningTime="2025-12-01 15:00:44.778046583 +0000 UTC m=+895.295755411" watchObservedRunningTime="2025-12-01 15:00:44.796161762 +0000 UTC m=+895.313870590" Dec 01 15:00:44 crc kubenswrapper[4637]: I1201 15:00:44.948606 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-4c7cm" podStartSLOduration=6.578235458 podStartE2EDuration="36.948558532s" podCreationTimestamp="2025-12-01 15:00:08 +0000 UTC" firstStartedPulling="2025-12-01 15:00:10.925020833 
+0000 UTC m=+861.442729661" lastFinishedPulling="2025-12-01 15:00:41.295343877 +0000 UTC m=+891.813052735" observedRunningTime="2025-12-01 15:00:44.904384181 +0000 UTC m=+895.422093009" watchObservedRunningTime="2025-12-01 15:00:44.948558532 +0000 UTC m=+895.466267370" Dec 01 15:00:45 crc kubenswrapper[4637]: I1201 15:00:45.006445 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-ngs55" podStartSLOduration=6.214096268 podStartE2EDuration="37.006407442s" podCreationTimestamp="2025-12-01 15:00:08 +0000 UTC" firstStartedPulling="2025-12-01 15:00:10.50180763 +0000 UTC m=+861.019516458" lastFinishedPulling="2025-12-01 15:00:41.294118764 +0000 UTC m=+891.811827632" observedRunningTime="2025-12-01 15:00:44.949455566 +0000 UTC m=+895.467164394" watchObservedRunningTime="2025-12-01 15:00:45.006407442 +0000 UTC m=+895.524116270" Dec 01 15:00:45 crc kubenswrapper[4637]: I1201 15:00:45.613332 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:00:45 crc kubenswrapper[4637]: I1201 15:00:45.613739 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:00:45 crc kubenswrapper[4637]: I1201 15:00:45.686752 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-54485f899-c8xnq" 
event={"ID":"ee97ba6c-4f2f-4a8a-b631-ae8a77b4c35b","Type":"ContainerStarted","Data":"de1894c246e8fe8e3538189d3cee83fcbfb06207448e0acdabe17f21f55d27b8"} Dec 01 15:00:45 crc kubenswrapper[4637]: I1201 15:00:45.687321 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-54485f899-c8xnq" Dec 01 15:00:45 crc kubenswrapper[4637]: I1201 15:00:45.688490 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-ng9gk" event={"ID":"641c4df0-62e4-4b62-8f75-60e49bb56f7a","Type":"ContainerStarted","Data":"5df3532888ab4ea7396fabd650ae3c165c6359589129d83ee22b533edce33fb9"} Dec 01 15:00:45 crc kubenswrapper[4637]: I1201 15:00:45.688669 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-ng9gk" Dec 01 15:00:45 crc kubenswrapper[4637]: I1201 15:00:45.689976 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-748967c98-7t5h2" event={"ID":"612d1951-263e-4d58-a3ab-94f8b2ddcb68","Type":"ContainerStarted","Data":"c76b6358d1cbd157c020af722f5594cb1f187e079d44b860ea538ab48836938a"} Dec 01 15:00:45 crc kubenswrapper[4637]: I1201 15:00:45.690057 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-748967c98-7t5h2" Dec 01 15:00:45 crc kubenswrapper[4637]: I1201 15:00:45.691974 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-wbjjx" event={"ID":"e99ec116-bc40-4275-b124-476b780bf9ca","Type":"ContainerStarted","Data":"1f36de0ebef9cb066fda8ca1d270d0f65e6ea0a4b91d65665f78b68e3e2668e7"} Dec 01 15:00:45 crc kubenswrapper[4637]: I1201 15:00:45.692123 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-wbjjx" Dec 01 15:00:45 crc kubenswrapper[4637]: I1201 15:00:45.693503 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-69cgp" event={"ID":"1836e03a-1ea3-4a52-98e5-9e6f7e04d1b0","Type":"ContainerStarted","Data":"934c6a6132b58ccd25076b63505b4c8787b99d7ec52d0071d7f766943e6300c4"} Dec 01 15:00:45 crc kubenswrapper[4637]: I1201 15:00:45.693635 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-69cgp" Dec 01 15:00:45 crc kubenswrapper[4637]: I1201 15:00:45.698368 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-kx7cf" event={"ID":"9235c8f6-6738-496d-a945-42ba5d15afd2","Type":"ContainerStarted","Data":"d2582c141fe220ffc75ba4cfe6855ba0836df1f334d58a2d22dcd41c2106aaa0"} Dec 01 15:00:45 crc kubenswrapper[4637]: I1201 15:00:45.700309 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-kx7cf" Dec 01 15:00:45 crc kubenswrapper[4637]: I1201 15:00:45.709643 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-lfzmh" event={"ID":"4f425213-2aa4-419c-b672-22a94b28958a","Type":"ContainerStarted","Data":"1f31a6c6885bb259abdc164af606ec57b9f952875cf1c41c55902ee00cc29a8a"} Dec 01 15:00:45 crc kubenswrapper[4637]: I1201 15:00:45.709692 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-lfzmh" Dec 01 15:00:45 crc kubenswrapper[4637]: I1201 15:00:45.787976 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-54485f899-c8xnq" 
podStartSLOduration=3.909702159 podStartE2EDuration="36.78795126s" podCreationTimestamp="2025-12-01 15:00:09 +0000 UTC" firstStartedPulling="2025-12-01 15:00:11.718918404 +0000 UTC m=+862.236627232" lastFinishedPulling="2025-12-01 15:00:44.597167505 +0000 UTC m=+895.114876333" observedRunningTime="2025-12-01 15:00:45.722997148 +0000 UTC m=+896.240705986" watchObservedRunningTime="2025-12-01 15:00:45.78795126 +0000 UTC m=+896.305660088" Dec 01 15:00:45 crc kubenswrapper[4637]: I1201 15:00:45.788730 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-ng9gk" podStartSLOduration=3.332584254 podStartE2EDuration="36.78872443s" podCreationTimestamp="2025-12-01 15:00:09 +0000 UTC" firstStartedPulling="2025-12-01 15:00:11.508633433 +0000 UTC m=+862.026342261" lastFinishedPulling="2025-12-01 15:00:44.964773609 +0000 UTC m=+895.482482437" observedRunningTime="2025-12-01 15:00:45.786674046 +0000 UTC m=+896.304382874" watchObservedRunningTime="2025-12-01 15:00:45.78872443 +0000 UTC m=+896.306433258" Dec 01 15:00:45 crc kubenswrapper[4637]: I1201 15:00:45.840962 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-kx7cf" podStartSLOduration=4.398103271 podStartE2EDuration="36.840924888s" podCreationTimestamp="2025-12-01 15:00:09 +0000 UTC" firstStartedPulling="2025-12-01 15:00:12.098755238 +0000 UTC m=+862.616464066" lastFinishedPulling="2025-12-01 15:00:44.541576855 +0000 UTC m=+895.059285683" observedRunningTime="2025-12-01 15:00:45.835870752 +0000 UTC m=+896.353579580" watchObservedRunningTime="2025-12-01 15:00:45.840924888 +0000 UTC m=+896.358633706" Dec 01 15:00:45 crc kubenswrapper[4637]: I1201 15:00:45.903247 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-lfzmh" podStartSLOduration=4.411938435 
podStartE2EDuration="36.903217989s" podCreationTimestamp="2025-12-01 15:00:09 +0000 UTC" firstStartedPulling="2025-12-01 15:00:11.904948552 +0000 UTC m=+862.422657380" lastFinishedPulling="2025-12-01 15:00:44.396228096 +0000 UTC m=+894.913936934" observedRunningTime="2025-12-01 15:00:45.899862848 +0000 UTC m=+896.417571676" watchObservedRunningTime="2025-12-01 15:00:45.903217989 +0000 UTC m=+896.420926817" Dec 01 15:00:45 crc kubenswrapper[4637]: I1201 15:00:45.934015 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-748967c98-7t5h2" podStartSLOduration=4.034305761 podStartE2EDuration="37.933994469s" podCreationTimestamp="2025-12-01 15:00:08 +0000 UTC" firstStartedPulling="2025-12-01 15:00:10.950623244 +0000 UTC m=+861.468332072" lastFinishedPulling="2025-12-01 15:00:44.850311952 +0000 UTC m=+895.368020780" observedRunningTime="2025-12-01 15:00:45.933029372 +0000 UTC m=+896.450738200" watchObservedRunningTime="2025-12-01 15:00:45.933994469 +0000 UTC m=+896.451703297" Dec 01 15:00:45 crc kubenswrapper[4637]: I1201 15:00:45.998425 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-69cgp" podStartSLOduration=4.154160053 podStartE2EDuration="36.998401636s" podCreationTimestamp="2025-12-01 15:00:09 +0000 UTC" firstStartedPulling="2025-12-01 15:00:11.721518735 +0000 UTC m=+862.239227563" lastFinishedPulling="2025-12-01 15:00:44.565760318 +0000 UTC m=+895.083469146" observedRunningTime="2025-12-01 15:00:45.953392192 +0000 UTC m=+896.471101020" watchObservedRunningTime="2025-12-01 15:00:45.998401636 +0000 UTC m=+896.516110464" Dec 01 15:00:46 crc kubenswrapper[4637]: I1201 15:00:46.001595 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-wbjjx" podStartSLOduration=4.475329974 
podStartE2EDuration="37.001588031s" podCreationTimestamp="2025-12-01 15:00:09 +0000 UTC" firstStartedPulling="2025-12-01 15:00:12.440828454 +0000 UTC m=+862.958537282" lastFinishedPulling="2025-12-01 15:00:44.967086511 +0000 UTC m=+895.484795339" observedRunningTime="2025-12-01 15:00:45.997378638 +0000 UTC m=+896.515087466" watchObservedRunningTime="2025-12-01 15:00:46.001588031 +0000 UTC m=+896.519296859" Dec 01 15:00:47 crc kubenswrapper[4637]: I1201 15:00:47.728763 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-867d87977b-tpjpc" event={"ID":"d8f49f2b-6edc-40e6-b5cf-da3e8f26009f","Type":"ContainerStarted","Data":"f2ea176fbf52b75c5480b3a26086dfb56f455f1daa1ddfdac4516353ddbd1b51"} Dec 01 15:00:47 crc kubenswrapper[4637]: I1201 15:00:47.729874 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-867d87977b-tpjpc" Dec 01 15:00:47 crc kubenswrapper[4637]: I1201 15:00:47.747998 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-867d87977b-tpjpc" podStartSLOduration=3.8388154869999997 podStartE2EDuration="38.7479762s" podCreationTimestamp="2025-12-01 15:00:09 +0000 UTC" firstStartedPulling="2025-12-01 15:00:11.880744119 +0000 UTC m=+862.398452947" lastFinishedPulling="2025-12-01 15:00:46.789904832 +0000 UTC m=+897.307613660" observedRunningTime="2025-12-01 15:00:47.74536463 +0000 UTC m=+898.263073468" watchObservedRunningTime="2025-12-01 15:00:47.7479762 +0000 UTC m=+898.265685028" Dec 01 15:00:49 crc kubenswrapper[4637]: I1201 15:00:49.168015 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-xm2c9" Dec 01 15:00:49 crc kubenswrapper[4637]: I1201 15:00:49.229206 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-ngs55" Dec 01 15:00:49 crc kubenswrapper[4637]: I1201 15:00:49.318956 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-6bd966bbd4-4c7cm" Dec 01 15:00:49 crc kubenswrapper[4637]: I1201 15:00:49.690909 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-4qp7b" Dec 01 15:00:49 crc kubenswrapper[4637]: I1201 15:00:49.745633 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7d6f5d799-p5htp" Dec 01 15:00:49 crc kubenswrapper[4637]: I1201 15:00:49.808822 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-njp6w" Dec 01 15:00:49 crc kubenswrapper[4637]: I1201 15:00:49.856649 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-4h7xc" Dec 01 15:00:50 crc kubenswrapper[4637]: I1201 15:00:50.061947 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-j8flc" Dec 01 15:00:50 crc kubenswrapper[4637]: I1201 15:00:50.177907 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7979c68bc7-69cgp" Dec 01 15:00:50 crc kubenswrapper[4637]: I1201 15:00:50.347393 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-cc9f5bc5c-lfzmh" Dec 01 15:00:50 crc kubenswrapper[4637]: I1201 15:00:50.581422 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x9rwl"] Dec 01 15:00:50 crc 
kubenswrapper[4637]: I1201 15:00:50.586367 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x9rwl" Dec 01 15:00:50 crc kubenswrapper[4637]: I1201 15:00:50.589701 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x9rwl"] Dec 01 15:00:50 crc kubenswrapper[4637]: I1201 15:00:50.634161 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95e3738f-8d83-4472-afd2-0195b99b263d-utilities\") pod \"community-operators-x9rwl\" (UID: \"95e3738f-8d83-4472-afd2-0195b99b263d\") " pod="openshift-marketplace/community-operators-x9rwl" Dec 01 15:00:50 crc kubenswrapper[4637]: I1201 15:00:50.634343 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95e3738f-8d83-4472-afd2-0195b99b263d-catalog-content\") pod \"community-operators-x9rwl\" (UID: \"95e3738f-8d83-4472-afd2-0195b99b263d\") " pod="openshift-marketplace/community-operators-x9rwl" Dec 01 15:00:50 crc kubenswrapper[4637]: I1201 15:00:50.634419 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w8cz\" (UniqueName: \"kubernetes.io/projected/95e3738f-8d83-4472-afd2-0195b99b263d-kube-api-access-9w8cz\") pod \"community-operators-x9rwl\" (UID: \"95e3738f-8d83-4472-afd2-0195b99b263d\") " pod="openshift-marketplace/community-operators-x9rwl" Dec 01 15:00:50 crc kubenswrapper[4637]: I1201 15:00:50.736022 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95e3738f-8d83-4472-afd2-0195b99b263d-utilities\") pod \"community-operators-x9rwl\" (UID: \"95e3738f-8d83-4472-afd2-0195b99b263d\") " pod="openshift-marketplace/community-operators-x9rwl" Dec 01 15:00:50 crc 
kubenswrapper[4637]: I1201 15:00:50.736523 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95e3738f-8d83-4472-afd2-0195b99b263d-catalog-content\") pod \"community-operators-x9rwl\" (UID: \"95e3738f-8d83-4472-afd2-0195b99b263d\") " pod="openshift-marketplace/community-operators-x9rwl" Dec 01 15:00:50 crc kubenswrapper[4637]: I1201 15:00:50.736653 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95e3738f-8d83-4472-afd2-0195b99b263d-utilities\") pod \"community-operators-x9rwl\" (UID: \"95e3738f-8d83-4472-afd2-0195b99b263d\") " pod="openshift-marketplace/community-operators-x9rwl" Dec 01 15:00:50 crc kubenswrapper[4637]: I1201 15:00:50.736676 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w8cz\" (UniqueName: \"kubernetes.io/projected/95e3738f-8d83-4472-afd2-0195b99b263d-kube-api-access-9w8cz\") pod \"community-operators-x9rwl\" (UID: \"95e3738f-8d83-4472-afd2-0195b99b263d\") " pod="openshift-marketplace/community-operators-x9rwl" Dec 01 15:00:50 crc kubenswrapper[4637]: I1201 15:00:50.737192 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95e3738f-8d83-4472-afd2-0195b99b263d-catalog-content\") pod \"community-operators-x9rwl\" (UID: \"95e3738f-8d83-4472-afd2-0195b99b263d\") " pod="openshift-marketplace/community-operators-x9rwl" Dec 01 15:00:50 crc kubenswrapper[4637]: I1201 15:00:50.760289 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w8cz\" (UniqueName: \"kubernetes.io/projected/95e3738f-8d83-4472-afd2-0195b99b263d-kube-api-access-9w8cz\") pod \"community-operators-x9rwl\" (UID: \"95e3738f-8d83-4472-afd2-0195b99b263d\") " pod="openshift-marketplace/community-operators-x9rwl" Dec 01 15:00:50 crc kubenswrapper[4637]: I1201 
15:00:50.787301 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-j4ktr" Dec 01 15:00:50 crc kubenswrapper[4637]: I1201 15:00:50.798059 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-58487d9bf4-khr5x" Dec 01 15:00:50 crc kubenswrapper[4637]: I1201 15:00:50.911526 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x9rwl" Dec 01 15:00:50 crc kubenswrapper[4637]: I1201 15:00:50.923337 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-tm79k" Dec 01 15:00:51 crc kubenswrapper[4637]: I1201 15:00:51.365263 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-577c5f6d94-kx7cf" Dec 01 15:00:51 crc kubenswrapper[4637]: I1201 15:00:51.511080 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x9rwl"] Dec 01 15:00:51 crc kubenswrapper[4637]: I1201 15:00:51.719428 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-wbjjx" Dec 01 15:00:51 crc kubenswrapper[4637]: I1201 15:00:51.761404 4637 generic.go:334] "Generic (PLEG): container finished" podID="95e3738f-8d83-4472-afd2-0195b99b263d" containerID="4dea8d5e2f1e927d1a80b7eea34849a41341eb251ee11be7c41dae7930d95dac" exitCode=0 Dec 01 15:00:51 crc kubenswrapper[4637]: I1201 15:00:51.761452 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x9rwl" event={"ID":"95e3738f-8d83-4472-afd2-0195b99b263d","Type":"ContainerDied","Data":"4dea8d5e2f1e927d1a80b7eea34849a41341eb251ee11be7c41dae7930d95dac"} Dec 01 15:00:51 
crc kubenswrapper[4637]: I1201 15:00:51.761479 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x9rwl" event={"ID":"95e3738f-8d83-4472-afd2-0195b99b263d","Type":"ContainerStarted","Data":"57b395b6c3976859a2864f46b3c1e62091d504f9eb7f604e9e0c7047c46fa2d5"} Dec 01 15:00:52 crc kubenswrapper[4637]: I1201 15:00:52.770185 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x9rwl" event={"ID":"95e3738f-8d83-4472-afd2-0195b99b263d","Type":"ContainerStarted","Data":"109255c783c223c4386d95e24600301e0126f44df325a671876f8a6c8c1dcfe8"} Dec 01 15:00:53 crc kubenswrapper[4637]: I1201 15:00:53.782393 4637 generic.go:334] "Generic (PLEG): container finished" podID="95e3738f-8d83-4472-afd2-0195b99b263d" containerID="109255c783c223c4386d95e24600301e0126f44df325a671876f8a6c8c1dcfe8" exitCode=0 Dec 01 15:00:53 crc kubenswrapper[4637]: I1201 15:00:53.783971 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x9rwl" event={"ID":"95e3738f-8d83-4472-afd2-0195b99b263d","Type":"ContainerDied","Data":"109255c783c223c4386d95e24600301e0126f44df325a671876f8a6c8c1dcfe8"} Dec 01 15:00:54 crc kubenswrapper[4637]: I1201 15:00:54.794070 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x9rwl" event={"ID":"95e3738f-8d83-4472-afd2-0195b99b263d","Type":"ContainerStarted","Data":"e20564efd59072191dca8f3f8de2d4fa6708c69f9605655faf0bb2fe106aaea7"} Dec 01 15:00:54 crc kubenswrapper[4637]: I1201 15:00:54.820015 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x9rwl" podStartSLOduration=2.229035131 podStartE2EDuration="4.819983917s" podCreationTimestamp="2025-12-01 15:00:50 +0000 UTC" firstStartedPulling="2025-12-01 15:00:51.763057365 +0000 UTC m=+902.280766193" lastFinishedPulling="2025-12-01 15:00:54.354006121 +0000 UTC 
m=+904.871714979" observedRunningTime="2025-12-01 15:00:54.813979955 +0000 UTC m=+905.331688783" watchObservedRunningTime="2025-12-01 15:00:54.819983917 +0000 UTC m=+905.337692755" Dec 01 15:00:57 crc kubenswrapper[4637]: I1201 15:00:57.833066 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-vhj5n" event={"ID":"1f5d18af-662c-438a-ab53-62d6c6049921","Type":"ContainerStarted","Data":"51e1abb4af4ec373243628108841929cf5916b270b636af9f32553e90ffe9da3"} Dec 01 15:00:57 crc kubenswrapper[4637]: I1201 15:00:57.835106 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-vhj5n" Dec 01 15:00:57 crc kubenswrapper[4637]: I1201 15:00:57.853704 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-vhj5n" podStartSLOduration=3.233886714 podStartE2EDuration="48.853682487s" podCreationTimestamp="2025-12-01 15:00:09 +0000 UTC" firstStartedPulling="2025-12-01 15:00:11.834455391 +0000 UTC m=+862.352164219" lastFinishedPulling="2025-12-01 15:00:57.454251164 +0000 UTC m=+907.971959992" observedRunningTime="2025-12-01 15:00:57.850009057 +0000 UTC m=+908.367717885" watchObservedRunningTime="2025-12-01 15:00:57.853682487 +0000 UTC m=+908.371391315" Dec 01 15:00:58 crc kubenswrapper[4637]: I1201 15:00:58.845279 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-cnl9j" event={"ID":"a4dea3ee-ee87-4f8b-8b76-55db9bc85fc9","Type":"ContainerStarted","Data":"7bb27d0d879908bccf310cb7e26fe550ef73124d5c6deb9c927b92785bd5a514"} Dec 01 15:00:58 crc kubenswrapper[4637]: I1201 15:00:58.845665 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-cnl9j" Dec 01 15:00:58 crc kubenswrapper[4637]: I1201 
15:00:58.862589 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-cnl9j" podStartSLOduration=3.244828567 podStartE2EDuration="49.862565845s" podCreationTimestamp="2025-12-01 15:00:09 +0000 UTC" firstStartedPulling="2025-12-01 15:00:11.880084261 +0000 UTC m=+862.397793089" lastFinishedPulling="2025-12-01 15:00:58.497821529 +0000 UTC m=+909.015530367" observedRunningTime="2025-12-01 15:00:58.860923139 +0000 UTC m=+909.378631967" watchObservedRunningTime="2025-12-01 15:00:58.862565845 +0000 UTC m=+909.380274683" Dec 01 15:00:59 crc kubenswrapper[4637]: I1201 15:00:59.250177 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-748967c98-7t5h2" Dec 01 15:00:59 crc kubenswrapper[4637]: I1201 15:00:59.881560 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-ng9gk" Dec 01 15:00:59 crc kubenswrapper[4637]: I1201 15:00:59.915059 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-54485f899-c8xnq" Dec 01 15:01:00 crc kubenswrapper[4637]: I1201 15:01:00.487331 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-867d87977b-tpjpc" Dec 01 15:01:00 crc kubenswrapper[4637]: I1201 15:01:00.912077 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x9rwl" Dec 01 15:01:00 crc kubenswrapper[4637]: I1201 15:01:00.912128 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x9rwl" Dec 01 15:01:00 crc kubenswrapper[4637]: I1201 15:01:00.958223 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-x9rwl" Dec 01 15:01:01 crc kubenswrapper[4637]: I1201 15:01:01.915885 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x9rwl" Dec 01 15:01:01 crc kubenswrapper[4637]: I1201 15:01:01.974701 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x9rwl"] Dec 01 15:01:03 crc kubenswrapper[4637]: I1201 15:01:03.929348 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x9rwl" podUID="95e3738f-8d83-4472-afd2-0195b99b263d" containerName="registry-server" containerID="cri-o://e20564efd59072191dca8f3f8de2d4fa6708c69f9605655faf0bb2fe106aaea7" gracePeriod=2 Dec 01 15:01:04 crc kubenswrapper[4637]: I1201 15:01:04.307136 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x9rwl" Dec 01 15:01:04 crc kubenswrapper[4637]: I1201 15:01:04.484118 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95e3738f-8d83-4472-afd2-0195b99b263d-utilities\") pod \"95e3738f-8d83-4472-afd2-0195b99b263d\" (UID: \"95e3738f-8d83-4472-afd2-0195b99b263d\") " Dec 01 15:01:04 crc kubenswrapper[4637]: I1201 15:01:04.484210 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w8cz\" (UniqueName: \"kubernetes.io/projected/95e3738f-8d83-4472-afd2-0195b99b263d-kube-api-access-9w8cz\") pod \"95e3738f-8d83-4472-afd2-0195b99b263d\" (UID: \"95e3738f-8d83-4472-afd2-0195b99b263d\") " Dec 01 15:01:04 crc kubenswrapper[4637]: I1201 15:01:04.484231 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95e3738f-8d83-4472-afd2-0195b99b263d-catalog-content\") pod 
\"95e3738f-8d83-4472-afd2-0195b99b263d\" (UID: \"95e3738f-8d83-4472-afd2-0195b99b263d\") " Dec 01 15:01:04 crc kubenswrapper[4637]: I1201 15:01:04.485534 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95e3738f-8d83-4472-afd2-0195b99b263d-utilities" (OuterVolumeSpecName: "utilities") pod "95e3738f-8d83-4472-afd2-0195b99b263d" (UID: "95e3738f-8d83-4472-afd2-0195b99b263d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:01:04 crc kubenswrapper[4637]: I1201 15:01:04.489104 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95e3738f-8d83-4472-afd2-0195b99b263d-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 15:01:04 crc kubenswrapper[4637]: I1201 15:01:04.504247 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95e3738f-8d83-4472-afd2-0195b99b263d-kube-api-access-9w8cz" (OuterVolumeSpecName: "kube-api-access-9w8cz") pod "95e3738f-8d83-4472-afd2-0195b99b263d" (UID: "95e3738f-8d83-4472-afd2-0195b99b263d"). InnerVolumeSpecName "kube-api-access-9w8cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:01:04 crc kubenswrapper[4637]: I1201 15:01:04.545734 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95e3738f-8d83-4472-afd2-0195b99b263d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95e3738f-8d83-4472-afd2-0195b99b263d" (UID: "95e3738f-8d83-4472-afd2-0195b99b263d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:01:04 crc kubenswrapper[4637]: I1201 15:01:04.591575 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95e3738f-8d83-4472-afd2-0195b99b263d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 15:01:04 crc kubenswrapper[4637]: I1201 15:01:04.591639 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w8cz\" (UniqueName: \"kubernetes.io/projected/95e3738f-8d83-4472-afd2-0195b99b263d-kube-api-access-9w8cz\") on node \"crc\" DevicePath \"\"" Dec 01 15:01:04 crc kubenswrapper[4637]: I1201 15:01:04.942061 4637 generic.go:334] "Generic (PLEG): container finished" podID="95e3738f-8d83-4472-afd2-0195b99b263d" containerID="e20564efd59072191dca8f3f8de2d4fa6708c69f9605655faf0bb2fe106aaea7" exitCode=0 Dec 01 15:01:04 crc kubenswrapper[4637]: I1201 15:01:04.942140 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x9rwl" Dec 01 15:01:04 crc kubenswrapper[4637]: I1201 15:01:04.942184 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x9rwl" event={"ID":"95e3738f-8d83-4472-afd2-0195b99b263d","Type":"ContainerDied","Data":"e20564efd59072191dca8f3f8de2d4fa6708c69f9605655faf0bb2fe106aaea7"} Dec 01 15:01:04 crc kubenswrapper[4637]: I1201 15:01:04.942690 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x9rwl" event={"ID":"95e3738f-8d83-4472-afd2-0195b99b263d","Type":"ContainerDied","Data":"57b395b6c3976859a2864f46b3c1e62091d504f9eb7f604e9e0c7047c46fa2d5"} Dec 01 15:01:04 crc kubenswrapper[4637]: I1201 15:01:04.942737 4637 scope.go:117] "RemoveContainer" containerID="e20564efd59072191dca8f3f8de2d4fa6708c69f9605655faf0bb2fe106aaea7" Dec 01 15:01:04 crc kubenswrapper[4637]: I1201 15:01:04.968007 4637 scope.go:117] "RemoveContainer" 
containerID="109255c783c223c4386d95e24600301e0126f44df325a671876f8a6c8c1dcfe8" Dec 01 15:01:04 crc kubenswrapper[4637]: I1201 15:01:04.983961 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x9rwl"] Dec 01 15:01:04 crc kubenswrapper[4637]: I1201 15:01:04.997572 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x9rwl"] Dec 01 15:01:05 crc kubenswrapper[4637]: I1201 15:01:05.005115 4637 scope.go:117] "RemoveContainer" containerID="4dea8d5e2f1e927d1a80b7eea34849a41341eb251ee11be7c41dae7930d95dac" Dec 01 15:01:05 crc kubenswrapper[4637]: I1201 15:01:05.034195 4637 scope.go:117] "RemoveContainer" containerID="e20564efd59072191dca8f3f8de2d4fa6708c69f9605655faf0bb2fe106aaea7" Dec 01 15:01:05 crc kubenswrapper[4637]: E1201 15:01:05.034864 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e20564efd59072191dca8f3f8de2d4fa6708c69f9605655faf0bb2fe106aaea7\": container with ID starting with e20564efd59072191dca8f3f8de2d4fa6708c69f9605655faf0bb2fe106aaea7 not found: ID does not exist" containerID="e20564efd59072191dca8f3f8de2d4fa6708c69f9605655faf0bb2fe106aaea7" Dec 01 15:01:05 crc kubenswrapper[4637]: I1201 15:01:05.034923 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e20564efd59072191dca8f3f8de2d4fa6708c69f9605655faf0bb2fe106aaea7"} err="failed to get container status \"e20564efd59072191dca8f3f8de2d4fa6708c69f9605655faf0bb2fe106aaea7\": rpc error: code = NotFound desc = could not find container \"e20564efd59072191dca8f3f8de2d4fa6708c69f9605655faf0bb2fe106aaea7\": container with ID starting with e20564efd59072191dca8f3f8de2d4fa6708c69f9605655faf0bb2fe106aaea7 not found: ID does not exist" Dec 01 15:01:05 crc kubenswrapper[4637]: I1201 15:01:05.034984 4637 scope.go:117] "RemoveContainer" 
containerID="109255c783c223c4386d95e24600301e0126f44df325a671876f8a6c8c1dcfe8" Dec 01 15:01:05 crc kubenswrapper[4637]: E1201 15:01:05.035421 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"109255c783c223c4386d95e24600301e0126f44df325a671876f8a6c8c1dcfe8\": container with ID starting with 109255c783c223c4386d95e24600301e0126f44df325a671876f8a6c8c1dcfe8 not found: ID does not exist" containerID="109255c783c223c4386d95e24600301e0126f44df325a671876f8a6c8c1dcfe8" Dec 01 15:01:05 crc kubenswrapper[4637]: I1201 15:01:05.035474 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"109255c783c223c4386d95e24600301e0126f44df325a671876f8a6c8c1dcfe8"} err="failed to get container status \"109255c783c223c4386d95e24600301e0126f44df325a671876f8a6c8c1dcfe8\": rpc error: code = NotFound desc = could not find container \"109255c783c223c4386d95e24600301e0126f44df325a671876f8a6c8c1dcfe8\": container with ID starting with 109255c783c223c4386d95e24600301e0126f44df325a671876f8a6c8c1dcfe8 not found: ID does not exist" Dec 01 15:01:05 crc kubenswrapper[4637]: I1201 15:01:05.035503 4637 scope.go:117] "RemoveContainer" containerID="4dea8d5e2f1e927d1a80b7eea34849a41341eb251ee11be7c41dae7930d95dac" Dec 01 15:01:05 crc kubenswrapper[4637]: E1201 15:01:05.035804 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dea8d5e2f1e927d1a80b7eea34849a41341eb251ee11be7c41dae7930d95dac\": container with ID starting with 4dea8d5e2f1e927d1a80b7eea34849a41341eb251ee11be7c41dae7930d95dac not found: ID does not exist" containerID="4dea8d5e2f1e927d1a80b7eea34849a41341eb251ee11be7c41dae7930d95dac" Dec 01 15:01:05 crc kubenswrapper[4637]: I1201 15:01:05.035838 4637 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4dea8d5e2f1e927d1a80b7eea34849a41341eb251ee11be7c41dae7930d95dac"} err="failed to get container status \"4dea8d5e2f1e927d1a80b7eea34849a41341eb251ee11be7c41dae7930d95dac\": rpc error: code = NotFound desc = could not find container \"4dea8d5e2f1e927d1a80b7eea34849a41341eb251ee11be7c41dae7930d95dac\": container with ID starting with 4dea8d5e2f1e927d1a80b7eea34849a41341eb251ee11be7c41dae7930d95dac not found: ID does not exist" Dec 01 15:01:05 crc kubenswrapper[4637]: I1201 15:01:05.781558 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95e3738f-8d83-4472-afd2-0195b99b263d" path="/var/lib/kubelet/pods/95e3738f-8d83-4472-afd2-0195b99b263d/volumes" Dec 01 15:01:09 crc kubenswrapper[4637]: I1201 15:01:09.982771 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6b6c55ffd5-vhj5n" Dec 01 15:01:10 crc kubenswrapper[4637]: I1201 15:01:10.619479 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-cnl9j" Dec 01 15:01:15 crc kubenswrapper[4637]: I1201 15:01:15.613896 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:01:15 crc kubenswrapper[4637]: I1201 15:01:15.615194 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.019458 4637 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dtl4w"] Dec 01 15:01:28 crc kubenswrapper[4637]: E1201 15:01:28.023510 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95e3738f-8d83-4472-afd2-0195b99b263d" containerName="extract-utilities" Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.023535 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="95e3738f-8d83-4472-afd2-0195b99b263d" containerName="extract-utilities" Dec 01 15:01:28 crc kubenswrapper[4637]: E1201 15:01:28.023596 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95e3738f-8d83-4472-afd2-0195b99b263d" containerName="registry-server" Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.023605 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="95e3738f-8d83-4472-afd2-0195b99b263d" containerName="registry-server" Dec 01 15:01:28 crc kubenswrapper[4637]: E1201 15:01:28.023632 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95e3738f-8d83-4472-afd2-0195b99b263d" containerName="extract-content" Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.023641 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="95e3738f-8d83-4472-afd2-0195b99b263d" containerName="extract-content" Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.023842 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="95e3738f-8d83-4472-afd2-0195b99b263d" containerName="registry-server" Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.024801 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-dtl4w" Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.027640 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.027872 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-x9txk" Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.028069 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.042918 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.053218 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t87st\" (UniqueName: \"kubernetes.io/projected/b29b2b9a-60f3-4a4c-bcce-c1ab361babe9-kube-api-access-t87st\") pod \"dnsmasq-dns-675f4bcbfc-dtl4w\" (UID: \"b29b2b9a-60f3-4a4c-bcce-c1ab361babe9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dtl4w" Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.053377 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b29b2b9a-60f3-4a4c-bcce-c1ab361babe9-config\") pod \"dnsmasq-dns-675f4bcbfc-dtl4w\" (UID: \"b29b2b9a-60f3-4a4c-bcce-c1ab361babe9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dtl4w" Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.074289 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dtl4w"] Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.123221 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-82qbr"] Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.125011 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-82qbr" Dec 01 15:01:28 crc kubenswrapper[4637]: W1201 15:01:28.128574 4637 reflector.go:561] object-"openstack"/"dns-svc": failed to list *v1.ConfigMap: configmaps "dns-svc" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Dec 01 15:01:28 crc kubenswrapper[4637]: E1201 15:01:28.128617 4637 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dns-svc\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"dns-svc\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.155748 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d995cbf-5781-40e4-bb41-5f95f541f3c9-config\") pod \"dnsmasq-dns-78dd6ddcc-82qbr\" (UID: \"5d995cbf-5781-40e4-bb41-5f95f541f3c9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-82qbr" Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.155873 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97k94\" (UniqueName: \"kubernetes.io/projected/5d995cbf-5781-40e4-bb41-5f95f541f3c9-kube-api-access-97k94\") pod \"dnsmasq-dns-78dd6ddcc-82qbr\" (UID: \"5d995cbf-5781-40e4-bb41-5f95f541f3c9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-82qbr" Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.155919 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t87st\" (UniqueName: \"kubernetes.io/projected/b29b2b9a-60f3-4a4c-bcce-c1ab361babe9-kube-api-access-t87st\") pod \"dnsmasq-dns-675f4bcbfc-dtl4w\" (UID: 
\"b29b2b9a-60f3-4a4c-bcce-c1ab361babe9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dtl4w" Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.155969 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b29b2b9a-60f3-4a4c-bcce-c1ab361babe9-config\") pod \"dnsmasq-dns-675f4bcbfc-dtl4w\" (UID: \"b29b2b9a-60f3-4a4c-bcce-c1ab361babe9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dtl4w" Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.156023 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d995cbf-5781-40e4-bb41-5f95f541f3c9-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-82qbr\" (UID: \"5d995cbf-5781-40e4-bb41-5f95f541f3c9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-82qbr" Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.157981 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b29b2b9a-60f3-4a4c-bcce-c1ab361babe9-config\") pod \"dnsmasq-dns-675f4bcbfc-dtl4w\" (UID: \"b29b2b9a-60f3-4a4c-bcce-c1ab361babe9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dtl4w" Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.197871 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t87st\" (UniqueName: \"kubernetes.io/projected/b29b2b9a-60f3-4a4c-bcce-c1ab361babe9-kube-api-access-t87st\") pod \"dnsmasq-dns-675f4bcbfc-dtl4w\" (UID: \"b29b2b9a-60f3-4a4c-bcce-c1ab361babe9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dtl4w" Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.257455 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d995cbf-5781-40e4-bb41-5f95f541f3c9-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-82qbr\" (UID: \"5d995cbf-5781-40e4-bb41-5f95f541f3c9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-82qbr" Dec 01 
15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.257563 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d995cbf-5781-40e4-bb41-5f95f541f3c9-config\") pod \"dnsmasq-dns-78dd6ddcc-82qbr\" (UID: \"5d995cbf-5781-40e4-bb41-5f95f541f3c9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-82qbr" Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.257626 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97k94\" (UniqueName: \"kubernetes.io/projected/5d995cbf-5781-40e4-bb41-5f95f541f3c9-kube-api-access-97k94\") pod \"dnsmasq-dns-78dd6ddcc-82qbr\" (UID: \"5d995cbf-5781-40e4-bb41-5f95f541f3c9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-82qbr" Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.259122 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d995cbf-5781-40e4-bb41-5f95f541f3c9-config\") pod \"dnsmasq-dns-78dd6ddcc-82qbr\" (UID: \"5d995cbf-5781-40e4-bb41-5f95f541f3c9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-82qbr" Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.275417 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-82qbr"] Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.285286 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97k94\" (UniqueName: \"kubernetes.io/projected/5d995cbf-5781-40e4-bb41-5f95f541f3c9-kube-api-access-97k94\") pod \"dnsmasq-dns-78dd6ddcc-82qbr\" (UID: \"5d995cbf-5781-40e4-bb41-5f95f541f3c9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-82qbr" Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.348468 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-dtl4w" Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.728388 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dtl4w"] Dec 01 15:01:28 crc kubenswrapper[4637]: I1201 15:01:28.745978 4637 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 15:01:29 crc kubenswrapper[4637]: I1201 15:01:29.154167 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-dtl4w" event={"ID":"b29b2b9a-60f3-4a4c-bcce-c1ab361babe9","Type":"ContainerStarted","Data":"c3430ac16b5537653d7cd9dffe1dfe0828cdfbba509ef355650d1e98ce4bc5b3"} Dec 01 15:01:29 crc kubenswrapper[4637]: E1201 15:01:29.258451 4637 configmap.go:193] Couldn't get configMap openstack/dns-svc: failed to sync configmap cache: timed out waiting for the condition Dec 01 15:01:29 crc kubenswrapper[4637]: E1201 15:01:29.258571 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d995cbf-5781-40e4-bb41-5f95f541f3c9-dns-svc podName:5d995cbf-5781-40e4-bb41-5f95f541f3c9 nodeName:}" failed. No retries permitted until 2025-12-01 15:01:29.758545933 +0000 UTC m=+940.276254761 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "dns-svc" (UniqueName: "kubernetes.io/configmap/5d995cbf-5781-40e4-bb41-5f95f541f3c9-dns-svc") pod "dnsmasq-dns-78dd6ddcc-82qbr" (UID: "5d995cbf-5781-40e4-bb41-5f95f541f3c9") : failed to sync configmap cache: timed out waiting for the condition Dec 01 15:01:29 crc kubenswrapper[4637]: I1201 15:01:29.480706 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 01 15:01:29 crc kubenswrapper[4637]: I1201 15:01:29.575957 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dtl4w"] Dec 01 15:01:29 crc kubenswrapper[4637]: I1201 15:01:29.621808 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-hjkrb"] Dec 01 15:01:29 crc kubenswrapper[4637]: I1201 15:01:29.623418 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-hjkrb" Dec 01 15:01:29 crc kubenswrapper[4637]: I1201 15:01:29.660625 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-hjkrb"] Dec 01 15:01:29 crc kubenswrapper[4637]: I1201 15:01:29.682115 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a79142fd-eac0-49db-8c5b-11cce20542ef-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-hjkrb\" (UID: \"a79142fd-eac0-49db-8c5b-11cce20542ef\") " pod="openstack/dnsmasq-dns-5ccc8479f9-hjkrb" Dec 01 15:01:29 crc kubenswrapper[4637]: I1201 15:01:29.682189 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79142fd-eac0-49db-8c5b-11cce20542ef-config\") pod \"dnsmasq-dns-5ccc8479f9-hjkrb\" (UID: \"a79142fd-eac0-49db-8c5b-11cce20542ef\") " pod="openstack/dnsmasq-dns-5ccc8479f9-hjkrb" Dec 01 15:01:29 crc kubenswrapper[4637]: I1201 15:01:29.682227 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf7px\" (UniqueName: \"kubernetes.io/projected/a79142fd-eac0-49db-8c5b-11cce20542ef-kube-api-access-pf7px\") pod \"dnsmasq-dns-5ccc8479f9-hjkrb\" (UID: \"a79142fd-eac0-49db-8c5b-11cce20542ef\") " pod="openstack/dnsmasq-dns-5ccc8479f9-hjkrb" Dec 01 15:01:29 crc kubenswrapper[4637]: I1201 15:01:29.783678 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf7px\" (UniqueName: \"kubernetes.io/projected/a79142fd-eac0-49db-8c5b-11cce20542ef-kube-api-access-pf7px\") pod \"dnsmasq-dns-5ccc8479f9-hjkrb\" (UID: \"a79142fd-eac0-49db-8c5b-11cce20542ef\") " pod="openstack/dnsmasq-dns-5ccc8479f9-hjkrb" Dec 01 15:01:29 crc kubenswrapper[4637]: I1201 15:01:29.783749 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d995cbf-5781-40e4-bb41-5f95f541f3c9-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-82qbr\" (UID: \"5d995cbf-5781-40e4-bb41-5f95f541f3c9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-82qbr" Dec 01 15:01:29 crc kubenswrapper[4637]: I1201 15:01:29.783786 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a79142fd-eac0-49db-8c5b-11cce20542ef-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-hjkrb\" (UID: \"a79142fd-eac0-49db-8c5b-11cce20542ef\") " pod="openstack/dnsmasq-dns-5ccc8479f9-hjkrb" Dec 01 15:01:29 crc kubenswrapper[4637]: I1201 15:01:29.783837 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79142fd-eac0-49db-8c5b-11cce20542ef-config\") pod \"dnsmasq-dns-5ccc8479f9-hjkrb\" (UID: \"a79142fd-eac0-49db-8c5b-11cce20542ef\") " pod="openstack/dnsmasq-dns-5ccc8479f9-hjkrb" Dec 01 15:01:29 crc kubenswrapper[4637]: I1201 15:01:29.784864 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/a79142fd-eac0-49db-8c5b-11cce20542ef-config\") pod \"dnsmasq-dns-5ccc8479f9-hjkrb\" (UID: \"a79142fd-eac0-49db-8c5b-11cce20542ef\") " pod="openstack/dnsmasq-dns-5ccc8479f9-hjkrb" Dec 01 15:01:29 crc kubenswrapper[4637]: I1201 15:01:29.785771 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d995cbf-5781-40e4-bb41-5f95f541f3c9-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-82qbr\" (UID: \"5d995cbf-5781-40e4-bb41-5f95f541f3c9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-82qbr" Dec 01 15:01:29 crc kubenswrapper[4637]: I1201 15:01:29.786513 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a79142fd-eac0-49db-8c5b-11cce20542ef-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-hjkrb\" (UID: \"a79142fd-eac0-49db-8c5b-11cce20542ef\") " pod="openstack/dnsmasq-dns-5ccc8479f9-hjkrb" Dec 01 15:01:29 crc kubenswrapper[4637]: I1201 15:01:29.836131 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf7px\" (UniqueName: \"kubernetes.io/projected/a79142fd-eac0-49db-8c5b-11cce20542ef-kube-api-access-pf7px\") pod \"dnsmasq-dns-5ccc8479f9-hjkrb\" (UID: \"a79142fd-eac0-49db-8c5b-11cce20542ef\") " pod="openstack/dnsmasq-dns-5ccc8479f9-hjkrb" Dec 01 15:01:29 crc kubenswrapper[4637]: I1201 15:01:29.944201 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-hjkrb" Dec 01 15:01:29 crc kubenswrapper[4637]: I1201 15:01:29.963712 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-82qbr" Dec 01 15:01:30 crc kubenswrapper[4637]: I1201 15:01:30.302396 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-hjkrb"] Dec 01 15:01:30 crc kubenswrapper[4637]: W1201 15:01:30.308149 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda79142fd_eac0_49db_8c5b_11cce20542ef.slice/crio-ed60ce184f7e230afac3cacb57c28b872ca0bbb301fc1c8b457ea68e159dbd6e WatchSource:0}: Error finding container ed60ce184f7e230afac3cacb57c28b872ca0bbb301fc1c8b457ea68e159dbd6e: Status 404 returned error can't find the container with id ed60ce184f7e230afac3cacb57c28b872ca0bbb301fc1c8b457ea68e159dbd6e Dec 01 15:01:30 crc kubenswrapper[4637]: I1201 15:01:30.388567 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-82qbr"] Dec 01 15:01:30 crc kubenswrapper[4637]: I1201 15:01:30.463001 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8wt54"] Dec 01 15:01:30 crc kubenswrapper[4637]: I1201 15:01:30.471740 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8wt54" Dec 01 15:01:30 crc kubenswrapper[4637]: I1201 15:01:30.515493 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25ee40bc-05f5-4401-9fba-b3333df6b27e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8wt54\" (UID: \"25ee40bc-05f5-4401-9fba-b3333df6b27e\") " pod="openstack/dnsmasq-dns-57d769cc4f-8wt54" Dec 01 15:01:30 crc kubenswrapper[4637]: I1201 15:01:30.515545 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkkw9\" (UniqueName: \"kubernetes.io/projected/25ee40bc-05f5-4401-9fba-b3333df6b27e-kube-api-access-tkkw9\") pod \"dnsmasq-dns-57d769cc4f-8wt54\" (UID: \"25ee40bc-05f5-4401-9fba-b3333df6b27e\") " pod="openstack/dnsmasq-dns-57d769cc4f-8wt54" Dec 01 15:01:30 crc kubenswrapper[4637]: I1201 15:01:30.515584 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25ee40bc-05f5-4401-9fba-b3333df6b27e-config\") pod \"dnsmasq-dns-57d769cc4f-8wt54\" (UID: \"25ee40bc-05f5-4401-9fba-b3333df6b27e\") " pod="openstack/dnsmasq-dns-57d769cc4f-8wt54" Dec 01 15:01:30 crc kubenswrapper[4637]: I1201 15:01:30.544228 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8wt54"] Dec 01 15:01:30 crc kubenswrapper[4637]: I1201 15:01:30.617218 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkkw9\" (UniqueName: \"kubernetes.io/projected/25ee40bc-05f5-4401-9fba-b3333df6b27e-kube-api-access-tkkw9\") pod \"dnsmasq-dns-57d769cc4f-8wt54\" (UID: \"25ee40bc-05f5-4401-9fba-b3333df6b27e\") " pod="openstack/dnsmasq-dns-57d769cc4f-8wt54" Dec 01 15:01:30 crc kubenswrapper[4637]: I1201 15:01:30.617279 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/25ee40bc-05f5-4401-9fba-b3333df6b27e-config\") pod \"dnsmasq-dns-57d769cc4f-8wt54\" (UID: \"25ee40bc-05f5-4401-9fba-b3333df6b27e\") " pod="openstack/dnsmasq-dns-57d769cc4f-8wt54" Dec 01 15:01:30 crc kubenswrapper[4637]: I1201 15:01:30.617343 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25ee40bc-05f5-4401-9fba-b3333df6b27e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8wt54\" (UID: \"25ee40bc-05f5-4401-9fba-b3333df6b27e\") " pod="openstack/dnsmasq-dns-57d769cc4f-8wt54" Dec 01 15:01:30 crc kubenswrapper[4637]: I1201 15:01:30.618175 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25ee40bc-05f5-4401-9fba-b3333df6b27e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8wt54\" (UID: \"25ee40bc-05f5-4401-9fba-b3333df6b27e\") " pod="openstack/dnsmasq-dns-57d769cc4f-8wt54" Dec 01 15:01:30 crc kubenswrapper[4637]: I1201 15:01:30.618563 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25ee40bc-05f5-4401-9fba-b3333df6b27e-config\") pod \"dnsmasq-dns-57d769cc4f-8wt54\" (UID: \"25ee40bc-05f5-4401-9fba-b3333df6b27e\") " pod="openstack/dnsmasq-dns-57d769cc4f-8wt54" Dec 01 15:01:30 crc kubenswrapper[4637]: I1201 15:01:30.640639 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkkw9\" (UniqueName: \"kubernetes.io/projected/25ee40bc-05f5-4401-9fba-b3333df6b27e-kube-api-access-tkkw9\") pod \"dnsmasq-dns-57d769cc4f-8wt54\" (UID: \"25ee40bc-05f5-4401-9fba-b3333df6b27e\") " pod="openstack/dnsmasq-dns-57d769cc4f-8wt54" Dec 01 15:01:30 crc kubenswrapper[4637]: I1201 15:01:30.715677 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-82qbr"] Dec 01 15:01:30 crc kubenswrapper[4637]: I1201 15:01:30.848586 4637 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 15:01:30 crc kubenswrapper[4637]: I1201 15:01:30.855869 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8wt54" Dec 01 15:01:30 crc kubenswrapper[4637]: I1201 15:01:30.866254 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 15:01:30 crc kubenswrapper[4637]: I1201 15:01:30.868595 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:30 crc kubenswrapper[4637]: I1201 15:01:30.886131 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 01 15:01:30 crc kubenswrapper[4637]: I1201 15:01:30.886427 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-gh8nl" Dec 01 15:01:30 crc kubenswrapper[4637]: I1201 15:01:30.886613 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 01 15:01:30 crc kubenswrapper[4637]: I1201 15:01:30.886760 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 01 15:01:30 crc kubenswrapper[4637]: I1201 15:01:30.886881 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 01 15:01:30 crc kubenswrapper[4637]: I1201 15:01:30.887020 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 01 15:01:30 crc kubenswrapper[4637]: I1201 15:01:30.906942 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.032763 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fcxg\" (UniqueName: 
\"kubernetes.io/projected/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-kube-api-access-2fcxg\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.032845 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.032867 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.032886 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.032913 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.032947 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.032992 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.033019 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.033071 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.033110 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.033127 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.134908 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.135079 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.135108 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fcxg\" (UniqueName: \"kubernetes.io/projected/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-kube-api-access-2fcxg\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.135132 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.135154 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.135178 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.135195 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.135216 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.135248 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.135275 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.135313 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.137045 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.137432 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.137723 4637 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.138437 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.140027 4637 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.144565 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.145085 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.154197 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fcxg\" (UniqueName: \"kubernetes.io/projected/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-kube-api-access-2fcxg\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.159579 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.165622 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.176436 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.181284 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.206259 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-82qbr" event={"ID":"5d995cbf-5781-40e4-bb41-5f95f541f3c9","Type":"ContainerStarted","Data":"6b726ecda5bffdc433606a43b48386e54f62b4362fdaa386e55f8f4795af7c0f"} Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.207808 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-hjkrb" event={"ID":"a79142fd-eac0-49db-8c5b-11cce20542ef","Type":"ContainerStarted","Data":"ed60ce184f7e230afac3cacb57c28b872ca0bbb301fc1c8b457ea68e159dbd6e"} Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.256616 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.538648 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8wt54"] Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.599269 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.603276 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.607963 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.608706 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.613057 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.613226 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.613077 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.613351 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-w98tp" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.613783 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.618528 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.749456 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8eeaa55a-2c35-480c-baec-134ef1158e66-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.749814 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8eeaa55a-2c35-480c-baec-134ef1158e66-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.749861 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgmpl\" (UniqueName: \"kubernetes.io/projected/8eeaa55a-2c35-480c-baec-134ef1158e66-kube-api-access-rgmpl\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.749884 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8eeaa55a-2c35-480c-baec-134ef1158e66-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.749917 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8eeaa55a-2c35-480c-baec-134ef1158e66-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.749953 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8eeaa55a-2c35-480c-baec-134ef1158e66-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.749983 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8eeaa55a-2c35-480c-baec-134ef1158e66-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.750005 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8eeaa55a-2c35-480c-baec-134ef1158e66-config-data\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.750044 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.750066 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8eeaa55a-2c35-480c-baec-134ef1158e66-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.750083 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/8eeaa55a-2c35-480c-baec-134ef1158e66-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.851553 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.851609 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8eeaa55a-2c35-480c-baec-134ef1158e66-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.851636 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8eeaa55a-2c35-480c-baec-134ef1158e66-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.851684 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8eeaa55a-2c35-480c-baec-134ef1158e66-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.851709 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8eeaa55a-2c35-480c-baec-134ef1158e66-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " 
pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.851743 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgmpl\" (UniqueName: \"kubernetes.io/projected/8eeaa55a-2c35-480c-baec-134ef1158e66-kube-api-access-rgmpl\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.851763 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8eeaa55a-2c35-480c-baec-134ef1158e66-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.851794 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8eeaa55a-2c35-480c-baec-134ef1158e66-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.851812 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8eeaa55a-2c35-480c-baec-134ef1158e66-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.851848 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8eeaa55a-2c35-480c-baec-134ef1158e66-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.851869 4637 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8eeaa55a-2c35-480c-baec-134ef1158e66-config-data\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.853419 4637 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.853476 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8eeaa55a-2c35-480c-baec-134ef1158e66-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.855076 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8eeaa55a-2c35-480c-baec-134ef1158e66-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.855807 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8eeaa55a-2c35-480c-baec-134ef1158e66-config-data\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.856528 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8eeaa55a-2c35-480c-baec-134ef1158e66-plugins-conf\") pod 
\"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.858301 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8eeaa55a-2c35-480c-baec-134ef1158e66-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.870992 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8eeaa55a-2c35-480c-baec-134ef1158e66-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.874997 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8eeaa55a-2c35-480c-baec-134ef1158e66-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.884329 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8eeaa55a-2c35-480c-baec-134ef1158e66-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.898918 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8eeaa55a-2c35-480c-baec-134ef1158e66-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.907416 4637 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rgmpl\" (UniqueName: \"kubernetes.io/projected/8eeaa55a-2c35-480c-baec-134ef1158e66-kube-api-access-rgmpl\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.944101 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.954828 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 15:01:31 crc kubenswrapper[4637]: I1201 15:01:31.956055 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 01 15:01:31 crc kubenswrapper[4637]: W1201 15:01:31.979047 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbee806ff_8bec_49d0_a47f_bfd8edbb36fb.slice/crio-657d3c1ef5c2bdbec8c3233bd82db3da47270b0ef337da438738022f29d78689 WatchSource:0}: Error finding container 657d3c1ef5c2bdbec8c3233bd82db3da47270b0ef337da438738022f29d78689: Status 404 returned error can't find the container with id 657d3c1ef5c2bdbec8c3233bd82db3da47270b0ef337da438738022f29d78689 Dec 01 15:01:32 crc kubenswrapper[4637]: I1201 15:01:32.241964 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bee806ff-8bec-49d0-a47f-bfd8edbb36fb","Type":"ContainerStarted","Data":"657d3c1ef5c2bdbec8c3233bd82db3da47270b0ef337da438738022f29d78689"} Dec 01 15:01:32 crc kubenswrapper[4637]: I1201 15:01:32.248672 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8wt54" 
event={"ID":"25ee40bc-05f5-4401-9fba-b3333df6b27e","Type":"ContainerStarted","Data":"52282add689840593950d861db13a1fe1c82ca9f133a01452632202ba6d23b2e"} Dec 01 15:01:32 crc kubenswrapper[4637]: I1201 15:01:32.646537 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 15:01:32 crc kubenswrapper[4637]: W1201 15:01:32.662785 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8eeaa55a_2c35_480c_baec_134ef1158e66.slice/crio-9dc69a07152fc7b789ce5c18528ee0c387da8530c53f963f9fa9c1e1e7b733ee WatchSource:0}: Error finding container 9dc69a07152fc7b789ce5c18528ee0c387da8530c53f963f9fa9c1e1e7b733ee: Status 404 returned error can't find the container with id 9dc69a07152fc7b789ce5c18528ee0c387da8530c53f963f9fa9c1e1e7b733ee Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.096628 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.129630 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.132985 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-zqwpp" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.133297 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.139414 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.140110 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.140505 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.154401 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.156866 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.262415 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dae9e33c-c07e-4c13-8104-d1310d91de8c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dae9e33c-c07e-4c13-8104-d1310d91de8c\") " pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.262998 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae9e33c-c07e-4c13-8104-d1310d91de8c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"dae9e33c-c07e-4c13-8104-d1310d91de8c\") " 
pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.263088 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/dae9e33c-c07e-4c13-8104-d1310d91de8c-secrets\") pod \"openstack-galera-0\" (UID: \"dae9e33c-c07e-4c13-8104-d1310d91de8c\") " pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.263127 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae9e33c-c07e-4c13-8104-d1310d91de8c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"dae9e33c-c07e-4c13-8104-d1310d91de8c\") " pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.263449 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dae9e33c-c07e-4c13-8104-d1310d91de8c-kolla-config\") pod \"openstack-galera-0\" (UID: \"dae9e33c-c07e-4c13-8104-d1310d91de8c\") " pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.263614 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzns4\" (UniqueName: \"kubernetes.io/projected/dae9e33c-c07e-4c13-8104-d1310d91de8c-kube-api-access-wzns4\") pod \"openstack-galera-0\" (UID: \"dae9e33c-c07e-4c13-8104-d1310d91de8c\") " pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.263943 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"dae9e33c-c07e-4c13-8104-d1310d91de8c\") " pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 
15:01:33.263989 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dae9e33c-c07e-4c13-8104-d1310d91de8c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dae9e33c-c07e-4c13-8104-d1310d91de8c\") " pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.264036 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dae9e33c-c07e-4c13-8104-d1310d91de8c-config-data-default\") pod \"openstack-galera-0\" (UID: \"dae9e33c-c07e-4c13-8104-d1310d91de8c\") " pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.330906 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8eeaa55a-2c35-480c-baec-134ef1158e66","Type":"ContainerStarted","Data":"9dc69a07152fc7b789ce5c18528ee0c387da8530c53f963f9fa9c1e1e7b733ee"} Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.369722 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dae9e33c-c07e-4c13-8104-d1310d91de8c-kolla-config\") pod \"openstack-galera-0\" (UID: \"dae9e33c-c07e-4c13-8104-d1310d91de8c\") " pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.369848 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzns4\" (UniqueName: \"kubernetes.io/projected/dae9e33c-c07e-4c13-8104-d1310d91de8c-kube-api-access-wzns4\") pod \"openstack-galera-0\" (UID: \"dae9e33c-c07e-4c13-8104-d1310d91de8c\") " pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.370011 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"dae9e33c-c07e-4c13-8104-d1310d91de8c\") " pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.370091 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dae9e33c-c07e-4c13-8104-d1310d91de8c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dae9e33c-c07e-4c13-8104-d1310d91de8c\") " pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.370162 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dae9e33c-c07e-4c13-8104-d1310d91de8c-config-data-default\") pod \"openstack-galera-0\" (UID: \"dae9e33c-c07e-4c13-8104-d1310d91de8c\") " pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.370224 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dae9e33c-c07e-4c13-8104-d1310d91de8c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dae9e33c-c07e-4c13-8104-d1310d91de8c\") " pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.370260 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae9e33c-c07e-4c13-8104-d1310d91de8c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"dae9e33c-c07e-4c13-8104-d1310d91de8c\") " pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.370316 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/dae9e33c-c07e-4c13-8104-d1310d91de8c-secrets\") pod \"openstack-galera-0\" (UID: \"dae9e33c-c07e-4c13-8104-d1310d91de8c\") " 
pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.370378 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae9e33c-c07e-4c13-8104-d1310d91de8c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"dae9e33c-c07e-4c13-8104-d1310d91de8c\") " pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.370664 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dae9e33c-c07e-4c13-8104-d1310d91de8c-kolla-config\") pod \"openstack-galera-0\" (UID: \"dae9e33c-c07e-4c13-8104-d1310d91de8c\") " pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.371367 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dae9e33c-c07e-4c13-8104-d1310d91de8c-config-data-default\") pod \"openstack-galera-0\" (UID: \"dae9e33c-c07e-4c13-8104-d1310d91de8c\") " pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.372040 4637 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"dae9e33c-c07e-4c13-8104-d1310d91de8c\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.375381 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dae9e33c-c07e-4c13-8104-d1310d91de8c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dae9e33c-c07e-4c13-8104-d1310d91de8c\") " pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.381825 4637 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dae9e33c-c07e-4c13-8104-d1310d91de8c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dae9e33c-c07e-4c13-8104-d1310d91de8c\") " pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.395706 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae9e33c-c07e-4c13-8104-d1310d91de8c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"dae9e33c-c07e-4c13-8104-d1310d91de8c\") " pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.405141 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/dae9e33c-c07e-4c13-8104-d1310d91de8c-secrets\") pod \"openstack-galera-0\" (UID: \"dae9e33c-c07e-4c13-8104-d1310d91de8c\") " pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.417405 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzns4\" (UniqueName: \"kubernetes.io/projected/dae9e33c-c07e-4c13-8104-d1310d91de8c-kube-api-access-wzns4\") pod \"openstack-galera-0\" (UID: \"dae9e33c-c07e-4c13-8104-d1310d91de8c\") " pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.420343 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae9e33c-c07e-4c13-8104-d1310d91de8c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"dae9e33c-c07e-4c13-8104-d1310d91de8c\") " pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.442003 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: 
\"dae9e33c-c07e-4c13-8104-d1310d91de8c\") " pod="openstack/openstack-galera-0" Dec 01 15:01:33 crc kubenswrapper[4637]: I1201 15:01:33.515319 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.303356 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.422379 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.425051 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.429495 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.429869 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-nx5b6" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.430103 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.433335 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.445671 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.503115 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3cfe4e59-0e72-4440-b962-2f86664cb2d7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3cfe4e59-0e72-4440-b962-2f86664cb2d7\") " 
pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.503189 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cfe4e59-0e72-4440-b962-2f86664cb2d7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3cfe4e59-0e72-4440-b962-2f86664cb2d7\") " pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.503226 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48glk\" (UniqueName: \"kubernetes.io/projected/3cfe4e59-0e72-4440-b962-2f86664cb2d7-kube-api-access-48glk\") pod \"openstack-cell1-galera-0\" (UID: \"3cfe4e59-0e72-4440-b962-2f86664cb2d7\") " pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.503255 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cfe4e59-0e72-4440-b962-2f86664cb2d7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3cfe4e59-0e72-4440-b962-2f86664cb2d7\") " pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.503273 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3cfe4e59-0e72-4440-b962-2f86664cb2d7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3cfe4e59-0e72-4440-b962-2f86664cb2d7\") " pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.503293 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/3cfe4e59-0e72-4440-b962-2f86664cb2d7-secrets\") pod \"openstack-cell1-galera-0\" (UID: 
\"3cfe4e59-0e72-4440-b962-2f86664cb2d7\") " pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.503330 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3cfe4e59-0e72-4440-b962-2f86664cb2d7\") " pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.503367 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cfe4e59-0e72-4440-b962-2f86664cb2d7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3cfe4e59-0e72-4440-b962-2f86664cb2d7\") " pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.503390 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3cfe4e59-0e72-4440-b962-2f86664cb2d7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3cfe4e59-0e72-4440-b962-2f86664cb2d7\") " pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.605109 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3cfe4e59-0e72-4440-b962-2f86664cb2d7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3cfe4e59-0e72-4440-b962-2f86664cb2d7\") " pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.605508 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/3cfe4e59-0e72-4440-b962-2f86664cb2d7-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"3cfe4e59-0e72-4440-b962-2f86664cb2d7\") " 
pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.605544 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3cfe4e59-0e72-4440-b962-2f86664cb2d7\") " pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.605599 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cfe4e59-0e72-4440-b962-2f86664cb2d7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3cfe4e59-0e72-4440-b962-2f86664cb2d7\") " pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.605621 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3cfe4e59-0e72-4440-b962-2f86664cb2d7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3cfe4e59-0e72-4440-b962-2f86664cb2d7\") " pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.605673 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3cfe4e59-0e72-4440-b962-2f86664cb2d7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3cfe4e59-0e72-4440-b962-2f86664cb2d7\") " pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.605770 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cfe4e59-0e72-4440-b962-2f86664cb2d7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3cfe4e59-0e72-4440-b962-2f86664cb2d7\") " pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 
15:01:34.605803 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48glk\" (UniqueName: \"kubernetes.io/projected/3cfe4e59-0e72-4440-b962-2f86664cb2d7-kube-api-access-48glk\") pod \"openstack-cell1-galera-0\" (UID: \"3cfe4e59-0e72-4440-b962-2f86664cb2d7\") " pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.605831 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cfe4e59-0e72-4440-b962-2f86664cb2d7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3cfe4e59-0e72-4440-b962-2f86664cb2d7\") " pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.607300 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cfe4e59-0e72-4440-b962-2f86664cb2d7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3cfe4e59-0e72-4440-b962-2f86664cb2d7\") " pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.607683 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3cfe4e59-0e72-4440-b962-2f86664cb2d7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3cfe4e59-0e72-4440-b962-2f86664cb2d7\") " pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.608211 4637 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3cfe4e59-0e72-4440-b962-2f86664cb2d7\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.619421 4637 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3cfe4e59-0e72-4440-b962-2f86664cb2d7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3cfe4e59-0e72-4440-b962-2f86664cb2d7\") " pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.620129 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3cfe4e59-0e72-4440-b962-2f86664cb2d7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3cfe4e59-0e72-4440-b962-2f86664cb2d7\") " pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.622751 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cfe4e59-0e72-4440-b962-2f86664cb2d7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3cfe4e59-0e72-4440-b962-2f86664cb2d7\") " pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.631294 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cfe4e59-0e72-4440-b962-2f86664cb2d7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3cfe4e59-0e72-4440-b962-2f86664cb2d7\") " pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.637680 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/3cfe4e59-0e72-4440-b962-2f86664cb2d7-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"3cfe4e59-0e72-4440-b962-2f86664cb2d7\") " pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.649824 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48glk\" (UniqueName: 
\"kubernetes.io/projected/3cfe4e59-0e72-4440-b962-2f86664cb2d7-kube-api-access-48glk\") pod \"openstack-cell1-galera-0\" (UID: \"3cfe4e59-0e72-4440-b962-2f86664cb2d7\") " pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.713380 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3cfe4e59-0e72-4440-b962-2f86664cb2d7\") " pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.761202 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.762552 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.775418 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.775675 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.775807 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-4cslx" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.788302 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.793040 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.810084 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5903ea37-db92-4a76-afb4-14cfa23415d0-config-data\") pod \"memcached-0\" (UID: \"5903ea37-db92-4a76-afb4-14cfa23415d0\") " pod="openstack/memcached-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.810134 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/5903ea37-db92-4a76-afb4-14cfa23415d0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"5903ea37-db92-4a76-afb4-14cfa23415d0\") " pod="openstack/memcached-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.810181 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5903ea37-db92-4a76-afb4-14cfa23415d0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"5903ea37-db92-4a76-afb4-14cfa23415d0\") " pod="openstack/memcached-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.810233 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5903ea37-db92-4a76-afb4-14cfa23415d0-kolla-config\") pod \"memcached-0\" (UID: \"5903ea37-db92-4a76-afb4-14cfa23415d0\") " pod="openstack/memcached-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.810315 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrjvd\" (UniqueName: \"kubernetes.io/projected/5903ea37-db92-4a76-afb4-14cfa23415d0-kube-api-access-nrjvd\") pod \"memcached-0\" (UID: \"5903ea37-db92-4a76-afb4-14cfa23415d0\") " pod="openstack/memcached-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 
15:01:34.916620 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrjvd\" (UniqueName: \"kubernetes.io/projected/5903ea37-db92-4a76-afb4-14cfa23415d0-kube-api-access-nrjvd\") pod \"memcached-0\" (UID: \"5903ea37-db92-4a76-afb4-14cfa23415d0\") " pod="openstack/memcached-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.916688 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5903ea37-db92-4a76-afb4-14cfa23415d0-config-data\") pod \"memcached-0\" (UID: \"5903ea37-db92-4a76-afb4-14cfa23415d0\") " pod="openstack/memcached-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.916721 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/5903ea37-db92-4a76-afb4-14cfa23415d0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"5903ea37-db92-4a76-afb4-14cfa23415d0\") " pod="openstack/memcached-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.916749 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5903ea37-db92-4a76-afb4-14cfa23415d0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"5903ea37-db92-4a76-afb4-14cfa23415d0\") " pod="openstack/memcached-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.916778 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5903ea37-db92-4a76-afb4-14cfa23415d0-kolla-config\") pod \"memcached-0\" (UID: \"5903ea37-db92-4a76-afb4-14cfa23415d0\") " pod="openstack/memcached-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.917865 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5903ea37-db92-4a76-afb4-14cfa23415d0-config-data\") pod 
\"memcached-0\" (UID: \"5903ea37-db92-4a76-afb4-14cfa23415d0\") " pod="openstack/memcached-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.918060 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5903ea37-db92-4a76-afb4-14cfa23415d0-kolla-config\") pod \"memcached-0\" (UID: \"5903ea37-db92-4a76-afb4-14cfa23415d0\") " pod="openstack/memcached-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.931799 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/5903ea37-db92-4a76-afb4-14cfa23415d0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"5903ea37-db92-4a76-afb4-14cfa23415d0\") " pod="openstack/memcached-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.960637 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5903ea37-db92-4a76-afb4-14cfa23415d0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"5903ea37-db92-4a76-afb4-14cfa23415d0\") " pod="openstack/memcached-0" Dec 01 15:01:34 crc kubenswrapper[4637]: I1201 15:01:34.986058 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrjvd\" (UniqueName: \"kubernetes.io/projected/5903ea37-db92-4a76-afb4-14cfa23415d0-kube-api-access-nrjvd\") pod \"memcached-0\" (UID: \"5903ea37-db92-4a76-afb4-14cfa23415d0\") " pod="openstack/memcached-0" Dec 01 15:01:35 crc kubenswrapper[4637]: I1201 15:01:35.143474 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 01 15:01:35 crc kubenswrapper[4637]: I1201 15:01:35.468198 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dae9e33c-c07e-4c13-8104-d1310d91de8c","Type":"ContainerStarted","Data":"f66b9f11166c541d47353fa51855cc3bd4514454e0defe8a9de40725db80fa1c"} Dec 01 15:01:35 crc kubenswrapper[4637]: I1201 15:01:35.697851 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 01 15:01:36 crc kubenswrapper[4637]: I1201 15:01:36.032848 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 01 15:01:36 crc kubenswrapper[4637]: I1201 15:01:36.498250 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"5903ea37-db92-4a76-afb4-14cfa23415d0","Type":"ContainerStarted","Data":"5abbf22ac78c89f5027ffaab1024e819035ced40ef3de028180e81153f646eb7"} Dec 01 15:01:36 crc kubenswrapper[4637]: I1201 15:01:36.517853 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3cfe4e59-0e72-4440-b962-2f86664cb2d7","Type":"ContainerStarted","Data":"60e7c5273d3e3f692fadf6cbde9fcdb846a7a58215bd226822831f6670b9a367"} Dec 01 15:01:36 crc kubenswrapper[4637]: I1201 15:01:36.895643 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 15:01:36 crc kubenswrapper[4637]: I1201 15:01:36.899502 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 01 15:01:36 crc kubenswrapper[4637]: I1201 15:01:36.904904 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-wtlsm" Dec 01 15:01:36 crc kubenswrapper[4637]: I1201 15:01:36.931323 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 15:01:37 crc kubenswrapper[4637]: I1201 15:01:37.013853 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwxk6\" (UniqueName: \"kubernetes.io/projected/d2a7716a-d336-4d98-97c6-32b6c8b18e28-kube-api-access-rwxk6\") pod \"kube-state-metrics-0\" (UID: \"d2a7716a-d336-4d98-97c6-32b6c8b18e28\") " pod="openstack/kube-state-metrics-0" Dec 01 15:01:37 crc kubenswrapper[4637]: I1201 15:01:37.117560 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwxk6\" (UniqueName: \"kubernetes.io/projected/d2a7716a-d336-4d98-97c6-32b6c8b18e28-kube-api-access-rwxk6\") pod \"kube-state-metrics-0\" (UID: \"d2a7716a-d336-4d98-97c6-32b6c8b18e28\") " pod="openstack/kube-state-metrics-0" Dec 01 15:01:37 crc kubenswrapper[4637]: I1201 15:01:37.151823 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwxk6\" (UniqueName: \"kubernetes.io/projected/d2a7716a-d336-4d98-97c6-32b6c8b18e28-kube-api-access-rwxk6\") pod \"kube-state-metrics-0\" (UID: \"d2a7716a-d336-4d98-97c6-32b6c8b18e28\") " pod="openstack/kube-state-metrics-0" Dec 01 15:01:37 crc kubenswrapper[4637]: I1201 15:01:37.294461 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 01 15:01:38 crc kubenswrapper[4637]: I1201 15:01:38.252903 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 15:01:38 crc kubenswrapper[4637]: W1201 15:01:38.292486 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2a7716a_d336_4d98_97c6_32b6c8b18e28.slice/crio-5e211dae9c4e9c11d896850e549961bec81e1e8eb5f023a60f714f6de37ef4a9 WatchSource:0}: Error finding container 5e211dae9c4e9c11d896850e549961bec81e1e8eb5f023a60f714f6de37ef4a9: Status 404 returned error can't find the container with id 5e211dae9c4e9c11d896850e549961bec81e1e8eb5f023a60f714f6de37ef4a9 Dec 01 15:01:38 crc kubenswrapper[4637]: I1201 15:01:38.598083 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d2a7716a-d336-4d98-97c6-32b6c8b18e28","Type":"ContainerStarted","Data":"5e211dae9c4e9c11d896850e549961bec81e1e8eb5f023a60f714f6de37ef4a9"} Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.313061 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.315072 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.318036 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.319473 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-vcjnd" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.319569 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.319483 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.319793 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.342379 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.426996 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-dhqng"] Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.434862 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/db3ed190-cdcc-4547-b48f-d09f6e881dfb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"db3ed190-cdcc-4547-b48f-d09f6e881dfb\") " pod="openstack/ovsdbserver-sb-0" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.435550 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db3ed190-cdcc-4547-b48f-d09f6e881dfb-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"db3ed190-cdcc-4547-b48f-d09f6e881dfb\") " 
pod="openstack/ovsdbserver-sb-0" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.436418 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db3ed190-cdcc-4547-b48f-d09f6e881dfb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"db3ed190-cdcc-4547-b48f-d09f6e881dfb\") " pod="openstack/ovsdbserver-sb-0" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.437381 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db3ed190-cdcc-4547-b48f-d09f6e881dfb-config\") pod \"ovsdbserver-sb-0\" (UID: \"db3ed190-cdcc-4547-b48f-d09f6e881dfb\") " pod="openstack/ovsdbserver-sb-0" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.437504 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm6fc\" (UniqueName: \"kubernetes.io/projected/db3ed190-cdcc-4547-b48f-d09f6e881dfb-kube-api-access-xm6fc\") pod \"ovsdbserver-sb-0\" (UID: \"db3ed190-cdcc-4547-b48f-d09f6e881dfb\") " pod="openstack/ovsdbserver-sb-0" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.437602 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/db3ed190-cdcc-4547-b48f-d09f6e881dfb-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"db3ed190-cdcc-4547-b48f-d09f6e881dfb\") " pod="openstack/ovsdbserver-sb-0" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.437759 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db3ed190-cdcc-4547-b48f-d09f6e881dfb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"db3ed190-cdcc-4547-b48f-d09f6e881dfb\") " pod="openstack/ovsdbserver-sb-0" Dec 01 15:01:41 crc kubenswrapper[4637]: 
I1201 15:01:41.437846 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"db3ed190-cdcc-4547-b48f-d09f6e881dfb\") " pod="openstack/ovsdbserver-sb-0" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.442919 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dhqng" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.452922 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-dqs8m" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.453346 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.453879 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.463990 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dhqng"] Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.481731 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-9pxbh"] Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.485435 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-9pxbh" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.493413 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9pxbh"] Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.539394 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db3ed190-cdcc-4547-b48f-d09f6e881dfb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"db3ed190-cdcc-4547-b48f-d09f6e881dfb\") " pod="openstack/ovsdbserver-sb-0" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.540189 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4037f80-7861-4283-99d5-2b078ef3de4b-scripts\") pod \"ovn-controller-ovs-9pxbh\" (UID: \"e4037f80-7861-4283-99d5-2b078ef3de4b\") " pod="openstack/ovn-controller-ovs-9pxbh" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.540342 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e-var-run\") pod \"ovn-controller-dhqng\" (UID: \"8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e\") " pod="openstack/ovn-controller-dhqng" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.540484 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e4037f80-7861-4283-99d5-2b078ef3de4b-var-run\") pod \"ovn-controller-ovs-9pxbh\" (UID: \"e4037f80-7861-4283-99d5-2b078ef3de4b\") " pod="openstack/ovn-controller-ovs-9pxbh" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.540604 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e-var-run-ovn\") 
pod \"ovn-controller-dhqng\" (UID: \"8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e\") " pod="openstack/ovn-controller-dhqng" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.540796 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e4037f80-7861-4283-99d5-2b078ef3de4b-etc-ovs\") pod \"ovn-controller-ovs-9pxbh\" (UID: \"e4037f80-7861-4283-99d5-2b078ef3de4b\") " pod="openstack/ovn-controller-ovs-9pxbh" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.541279 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e-combined-ca-bundle\") pod \"ovn-controller-dhqng\" (UID: \"8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e\") " pod="openstack/ovn-controller-dhqng" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.541459 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db3ed190-cdcc-4547-b48f-d09f6e881dfb-config\") pod \"ovsdbserver-sb-0\" (UID: \"db3ed190-cdcc-4547-b48f-d09f6e881dfb\") " pod="openstack/ovsdbserver-sb-0" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.542638 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm6fc\" (UniqueName: \"kubernetes.io/projected/db3ed190-cdcc-4547-b48f-d09f6e881dfb-kube-api-access-xm6fc\") pod \"ovsdbserver-sb-0\" (UID: \"db3ed190-cdcc-4547-b48f-d09f6e881dfb\") " pod="openstack/ovsdbserver-sb-0" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.543988 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e4037f80-7861-4283-99d5-2b078ef3de4b-var-lib\") pod \"ovn-controller-ovs-9pxbh\" (UID: \"e4037f80-7861-4283-99d5-2b078ef3de4b\") " 
pod="openstack/ovn-controller-ovs-9pxbh" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.544157 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/db3ed190-cdcc-4547-b48f-d09f6e881dfb-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"db3ed190-cdcc-4547-b48f-d09f6e881dfb\") " pod="openstack/ovsdbserver-sb-0" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.544271 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdxbn\" (UniqueName: \"kubernetes.io/projected/e4037f80-7861-4283-99d5-2b078ef3de4b-kube-api-access-kdxbn\") pod \"ovn-controller-ovs-9pxbh\" (UID: \"e4037f80-7861-4283-99d5-2b078ef3de4b\") " pod="openstack/ovn-controller-ovs-9pxbh" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.544358 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e-var-log-ovn\") pod \"ovn-controller-dhqng\" (UID: \"8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e\") " pod="openstack/ovn-controller-dhqng" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.544531 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e4037f80-7861-4283-99d5-2b078ef3de4b-var-log\") pod \"ovn-controller-ovs-9pxbh\" (UID: \"e4037f80-7861-4283-99d5-2b078ef3de4b\") " pod="openstack/ovn-controller-ovs-9pxbh" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.544639 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db3ed190-cdcc-4547-b48f-d09f6e881dfb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"db3ed190-cdcc-4547-b48f-d09f6e881dfb\") " pod="openstack/ovsdbserver-sb-0" Dec 01 15:01:41 crc 
kubenswrapper[4637]: I1201 15:01:41.544779 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"db3ed190-cdcc-4547-b48f-d09f6e881dfb\") " pod="openstack/ovsdbserver-sb-0" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.541213 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db3ed190-cdcc-4547-b48f-d09f6e881dfb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"db3ed190-cdcc-4547-b48f-d09f6e881dfb\") " pod="openstack/ovsdbserver-sb-0" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.542507 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db3ed190-cdcc-4547-b48f-d09f6e881dfb-config\") pod \"ovsdbserver-sb-0\" (UID: \"db3ed190-cdcc-4547-b48f-d09f6e881dfb\") " pod="openstack/ovsdbserver-sb-0" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.545135 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/db3ed190-cdcc-4547-b48f-d09f6e881dfb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"db3ed190-cdcc-4547-b48f-d09f6e881dfb\") " pod="openstack/ovsdbserver-sb-0" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.545789 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e-scripts\") pod \"ovn-controller-dhqng\" (UID: \"8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e\") " pod="openstack/ovn-controller-dhqng" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.545917 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhjb2\" (UniqueName: 
\"kubernetes.io/projected/8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e-kube-api-access-mhjb2\") pod \"ovn-controller-dhqng\" (UID: \"8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e\") " pod="openstack/ovn-controller-dhqng" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.546030 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db3ed190-cdcc-4547-b48f-d09f6e881dfb-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"db3ed190-cdcc-4547-b48f-d09f6e881dfb\") " pod="openstack/ovsdbserver-sb-0" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.546285 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e-ovn-controller-tls-certs\") pod \"ovn-controller-dhqng\" (UID: \"8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e\") " pod="openstack/ovn-controller-dhqng" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.545733 4637 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"db3ed190-cdcc-4547-b48f-d09f6e881dfb\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.545885 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/db3ed190-cdcc-4547-b48f-d09f6e881dfb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"db3ed190-cdcc-4547-b48f-d09f6e881dfb\") " pod="openstack/ovsdbserver-sb-0" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.553864 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/db3ed190-cdcc-4547-b48f-d09f6e881dfb-metrics-certs-tls-certs\") pod 
\"ovsdbserver-sb-0\" (UID: \"db3ed190-cdcc-4547-b48f-d09f6e881dfb\") " pod="openstack/ovsdbserver-sb-0" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.561773 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db3ed190-cdcc-4547-b48f-d09f6e881dfb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"db3ed190-cdcc-4547-b48f-d09f6e881dfb\") " pod="openstack/ovsdbserver-sb-0" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.564543 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db3ed190-cdcc-4547-b48f-d09f6e881dfb-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"db3ed190-cdcc-4547-b48f-d09f6e881dfb\") " pod="openstack/ovsdbserver-sb-0" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.564880 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm6fc\" (UniqueName: \"kubernetes.io/projected/db3ed190-cdcc-4547-b48f-d09f6e881dfb-kube-api-access-xm6fc\") pod \"ovsdbserver-sb-0\" (UID: \"db3ed190-cdcc-4547-b48f-d09f6e881dfb\") " pod="openstack/ovsdbserver-sb-0" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.580835 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"db3ed190-cdcc-4547-b48f-d09f6e881dfb\") " pod="openstack/ovsdbserver-sb-0" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.649102 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdxbn\" (UniqueName: \"kubernetes.io/projected/e4037f80-7861-4283-99d5-2b078ef3de4b-kube-api-access-kdxbn\") pod \"ovn-controller-ovs-9pxbh\" (UID: \"e4037f80-7861-4283-99d5-2b078ef3de4b\") " pod="openstack/ovn-controller-ovs-9pxbh" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 
15:01:41.651345 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e-var-log-ovn\") pod \"ovn-controller-dhqng\" (UID: \"8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e\") " pod="openstack/ovn-controller-dhqng" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.651674 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e-var-log-ovn\") pod \"ovn-controller-dhqng\" (UID: \"8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e\") " pod="openstack/ovn-controller-dhqng" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.656259 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e4037f80-7861-4283-99d5-2b078ef3de4b-var-log\") pod \"ovn-controller-ovs-9pxbh\" (UID: \"e4037f80-7861-4283-99d5-2b078ef3de4b\") " pod="openstack/ovn-controller-ovs-9pxbh" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.655397 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e4037f80-7861-4283-99d5-2b078ef3de4b-var-log\") pod \"ovn-controller-ovs-9pxbh\" (UID: \"e4037f80-7861-4283-99d5-2b078ef3de4b\") " pod="openstack/ovn-controller-ovs-9pxbh" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.657085 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhjb2\" (UniqueName: \"kubernetes.io/projected/8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e-kube-api-access-mhjb2\") pod \"ovn-controller-dhqng\" (UID: \"8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e\") " pod="openstack/ovn-controller-dhqng" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.657780 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e-scripts\") pod \"ovn-controller-dhqng\" (UID: \"8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e\") " pod="openstack/ovn-controller-dhqng" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.658017 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e-ovn-controller-tls-certs\") pod \"ovn-controller-dhqng\" (UID: \"8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e\") " pod="openstack/ovn-controller-dhqng" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.658135 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4037f80-7861-4283-99d5-2b078ef3de4b-scripts\") pod \"ovn-controller-ovs-9pxbh\" (UID: \"e4037f80-7861-4283-99d5-2b078ef3de4b\") " pod="openstack/ovn-controller-ovs-9pxbh" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.658289 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e-var-run\") pod \"ovn-controller-dhqng\" (UID: \"8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e\") " pod="openstack/ovn-controller-dhqng" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.658516 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e4037f80-7861-4283-99d5-2b078ef3de4b-var-run\") pod \"ovn-controller-ovs-9pxbh\" (UID: \"e4037f80-7861-4283-99d5-2b078ef3de4b\") " pod="openstack/ovn-controller-ovs-9pxbh" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.658641 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e-var-run-ovn\") pod \"ovn-controller-dhqng\" (UID: 
\"8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e\") " pod="openstack/ovn-controller-dhqng" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.658828 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e4037f80-7861-4283-99d5-2b078ef3de4b-etc-ovs\") pod \"ovn-controller-ovs-9pxbh\" (UID: \"e4037f80-7861-4283-99d5-2b078ef3de4b\") " pod="openstack/ovn-controller-ovs-9pxbh" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.659077 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e-combined-ca-bundle\") pod \"ovn-controller-dhqng\" (UID: \"8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e\") " pod="openstack/ovn-controller-dhqng" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.659336 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e4037f80-7861-4283-99d5-2b078ef3de4b-var-lib\") pod \"ovn-controller-ovs-9pxbh\" (UID: \"e4037f80-7861-4283-99d5-2b078ef3de4b\") " pod="openstack/ovn-controller-ovs-9pxbh" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.659710 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e4037f80-7861-4283-99d5-2b078ef3de4b-var-lib\") pod \"ovn-controller-ovs-9pxbh\" (UID: \"e4037f80-7861-4283-99d5-2b078ef3de4b\") " pod="openstack/ovn-controller-ovs-9pxbh" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.663705 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e-scripts\") pod \"ovn-controller-dhqng\" (UID: \"8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e\") " pod="openstack/ovn-controller-dhqng" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.675353 4637 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.682098 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e4037f80-7861-4283-99d5-2b078ef3de4b-etc-ovs\") pod \"ovn-controller-ovs-9pxbh\" (UID: \"e4037f80-7861-4283-99d5-2b078ef3de4b\") " pod="openstack/ovn-controller-ovs-9pxbh" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.682669 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e-ovn-controller-tls-certs\") pod \"ovn-controller-dhqng\" (UID: \"8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e\") " pod="openstack/ovn-controller-dhqng" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.683280 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e-var-run-ovn\") pod \"ovn-controller-dhqng\" (UID: \"8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e\") " pod="openstack/ovn-controller-dhqng" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.683455 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e-var-run\") pod \"ovn-controller-dhqng\" (UID: \"8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e\") " pod="openstack/ovn-controller-dhqng" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.684226 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e4037f80-7861-4283-99d5-2b078ef3de4b-var-run\") pod \"ovn-controller-ovs-9pxbh\" (UID: \"e4037f80-7861-4283-99d5-2b078ef3de4b\") " pod="openstack/ovn-controller-ovs-9pxbh" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.685347 4637 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4037f80-7861-4283-99d5-2b078ef3de4b-scripts\") pod \"ovn-controller-ovs-9pxbh\" (UID: \"e4037f80-7861-4283-99d5-2b078ef3de4b\") " pod="openstack/ovn-controller-ovs-9pxbh" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.686015 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e-combined-ca-bundle\") pod \"ovn-controller-dhqng\" (UID: \"8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e\") " pod="openstack/ovn-controller-dhqng" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.688820 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhjb2\" (UniqueName: \"kubernetes.io/projected/8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e-kube-api-access-mhjb2\") pod \"ovn-controller-dhqng\" (UID: \"8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e\") " pod="openstack/ovn-controller-dhqng" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.692279 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdxbn\" (UniqueName: \"kubernetes.io/projected/e4037f80-7861-4283-99d5-2b078ef3de4b-kube-api-access-kdxbn\") pod \"ovn-controller-ovs-9pxbh\" (UID: \"e4037f80-7861-4283-99d5-2b078ef3de4b\") " pod="openstack/ovn-controller-ovs-9pxbh" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.790518 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dhqng" Dec 01 15:01:41 crc kubenswrapper[4637]: I1201 15:01:41.819494 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-9pxbh" Dec 01 15:01:42 crc kubenswrapper[4637]: I1201 15:01:42.527288 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dhqng"] Dec 01 15:01:42 crc kubenswrapper[4637]: W1201 15:01:42.551627 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b8fa6a1_6ff7_4e51_a482_ad230a0cc88e.slice/crio-89d5e88dfc3b5a014b1e252c404d66f929007df3beae350c4d6e6198c0a39981 WatchSource:0}: Error finding container 89d5e88dfc3b5a014b1e252c404d66f929007df3beae350c4d6e6198c0a39981: Status 404 returned error can't find the container with id 89d5e88dfc3b5a014b1e252c404d66f929007df3beae350c4d6e6198c0a39981 Dec 01 15:01:42 crc kubenswrapper[4637]: I1201 15:01:42.722261 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dhqng" event={"ID":"8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e","Type":"ContainerStarted","Data":"89d5e88dfc3b5a014b1e252c404d66f929007df3beae350c4d6e6198c0a39981"} Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.026218 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.153494 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9pxbh"] Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.302300 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-f4khp"] Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.303568 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-f4khp" Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.307904 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.341093 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-f4khp"] Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.428534 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf139f21-cf2f-4ef1-9474-9c785a02053e-config\") pod \"ovn-controller-metrics-f4khp\" (UID: \"bf139f21-cf2f-4ef1-9474-9c785a02053e\") " pod="openstack/ovn-controller-metrics-f4khp" Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.428990 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bf139f21-cf2f-4ef1-9474-9c785a02053e-ovn-rundir\") pod \"ovn-controller-metrics-f4khp\" (UID: \"bf139f21-cf2f-4ef1-9474-9c785a02053e\") " pod="openstack/ovn-controller-metrics-f4khp" Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.429090 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf139f21-cf2f-4ef1-9474-9c785a02053e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-f4khp\" (UID: \"bf139f21-cf2f-4ef1-9474-9c785a02053e\") " pod="openstack/ovn-controller-metrics-f4khp" Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.431168 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/bf139f21-cf2f-4ef1-9474-9c785a02053e-ovs-rundir\") pod \"ovn-controller-metrics-f4khp\" (UID: \"bf139f21-cf2f-4ef1-9474-9c785a02053e\") " 
pod="openstack/ovn-controller-metrics-f4khp" Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.431300 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq5sh\" (UniqueName: \"kubernetes.io/projected/bf139f21-cf2f-4ef1-9474-9c785a02053e-kube-api-access-zq5sh\") pod \"ovn-controller-metrics-f4khp\" (UID: \"bf139f21-cf2f-4ef1-9474-9c785a02053e\") " pod="openstack/ovn-controller-metrics-f4khp" Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.431327 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf139f21-cf2f-4ef1-9474-9c785a02053e-combined-ca-bundle\") pod \"ovn-controller-metrics-f4khp\" (UID: \"bf139f21-cf2f-4ef1-9474-9c785a02053e\") " pod="openstack/ovn-controller-metrics-f4khp" Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.533524 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq5sh\" (UniqueName: \"kubernetes.io/projected/bf139f21-cf2f-4ef1-9474-9c785a02053e-kube-api-access-zq5sh\") pod \"ovn-controller-metrics-f4khp\" (UID: \"bf139f21-cf2f-4ef1-9474-9c785a02053e\") " pod="openstack/ovn-controller-metrics-f4khp" Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.533586 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf139f21-cf2f-4ef1-9474-9c785a02053e-combined-ca-bundle\") pod \"ovn-controller-metrics-f4khp\" (UID: \"bf139f21-cf2f-4ef1-9474-9c785a02053e\") " pod="openstack/ovn-controller-metrics-f4khp" Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.533677 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf139f21-cf2f-4ef1-9474-9c785a02053e-config\") pod \"ovn-controller-metrics-f4khp\" (UID: 
\"bf139f21-cf2f-4ef1-9474-9c785a02053e\") " pod="openstack/ovn-controller-metrics-f4khp" Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.533708 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bf139f21-cf2f-4ef1-9474-9c785a02053e-ovn-rundir\") pod \"ovn-controller-metrics-f4khp\" (UID: \"bf139f21-cf2f-4ef1-9474-9c785a02053e\") " pod="openstack/ovn-controller-metrics-f4khp" Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.533743 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf139f21-cf2f-4ef1-9474-9c785a02053e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-f4khp\" (UID: \"bf139f21-cf2f-4ef1-9474-9c785a02053e\") " pod="openstack/ovn-controller-metrics-f4khp" Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.533784 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/bf139f21-cf2f-4ef1-9474-9c785a02053e-ovs-rundir\") pod \"ovn-controller-metrics-f4khp\" (UID: \"bf139f21-cf2f-4ef1-9474-9c785a02053e\") " pod="openstack/ovn-controller-metrics-f4khp" Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.534317 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/bf139f21-cf2f-4ef1-9474-9c785a02053e-ovs-rundir\") pod \"ovn-controller-metrics-f4khp\" (UID: \"bf139f21-cf2f-4ef1-9474-9c785a02053e\") " pod="openstack/ovn-controller-metrics-f4khp" Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.534905 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bf139f21-cf2f-4ef1-9474-9c785a02053e-ovn-rundir\") pod \"ovn-controller-metrics-f4khp\" (UID: \"bf139f21-cf2f-4ef1-9474-9c785a02053e\") " 
pod="openstack/ovn-controller-metrics-f4khp" Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.535853 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf139f21-cf2f-4ef1-9474-9c785a02053e-config\") pod \"ovn-controller-metrics-f4khp\" (UID: \"bf139f21-cf2f-4ef1-9474-9c785a02053e\") " pod="openstack/ovn-controller-metrics-f4khp" Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.544487 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf139f21-cf2f-4ef1-9474-9c785a02053e-combined-ca-bundle\") pod \"ovn-controller-metrics-f4khp\" (UID: \"bf139f21-cf2f-4ef1-9474-9c785a02053e\") " pod="openstack/ovn-controller-metrics-f4khp" Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.555362 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq5sh\" (UniqueName: \"kubernetes.io/projected/bf139f21-cf2f-4ef1-9474-9c785a02053e-kube-api-access-zq5sh\") pod \"ovn-controller-metrics-f4khp\" (UID: \"bf139f21-cf2f-4ef1-9474-9c785a02053e\") " pod="openstack/ovn-controller-metrics-f4khp" Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.572665 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf139f21-cf2f-4ef1-9474-9c785a02053e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-f4khp\" (UID: \"bf139f21-cf2f-4ef1-9474-9c785a02053e\") " pod="openstack/ovn-controller-metrics-f4khp" Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.636466 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-f4khp" Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.852017 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-hjkrb"] Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.902538 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-kcj9n"] Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.906572 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-kcj9n" Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.921306 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-kcj9n"] Dec 01 15:01:43 crc kubenswrapper[4637]: I1201 15:01:43.928692 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.045144 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51824828-e3b1-4d72-9838-8c95fcef21ca-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-kcj9n\" (UID: \"51824828-e3b1-4d72-9838-8c95fcef21ca\") " pod="openstack/dnsmasq-dns-6bc7876d45-kcj9n" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.045253 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51824828-e3b1-4d72-9838-8c95fcef21ca-config\") pod \"dnsmasq-dns-6bc7876d45-kcj9n\" (UID: \"51824828-e3b1-4d72-9838-8c95fcef21ca\") " pod="openstack/dnsmasq-dns-6bc7876d45-kcj9n" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.045319 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9kd5\" (UniqueName: \"kubernetes.io/projected/51824828-e3b1-4d72-9838-8c95fcef21ca-kube-api-access-f9kd5\") pod 
\"dnsmasq-dns-6bc7876d45-kcj9n\" (UID: \"51824828-e3b1-4d72-9838-8c95fcef21ca\") " pod="openstack/dnsmasq-dns-6bc7876d45-kcj9n" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.045445 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51824828-e3b1-4d72-9838-8c95fcef21ca-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-kcj9n\" (UID: \"51824828-e3b1-4d72-9838-8c95fcef21ca\") " pod="openstack/dnsmasq-dns-6bc7876d45-kcj9n" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.147766 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51824828-e3b1-4d72-9838-8c95fcef21ca-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-kcj9n\" (UID: \"51824828-e3b1-4d72-9838-8c95fcef21ca\") " pod="openstack/dnsmasq-dns-6bc7876d45-kcj9n" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.147895 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51824828-e3b1-4d72-9838-8c95fcef21ca-config\") pod \"dnsmasq-dns-6bc7876d45-kcj9n\" (UID: \"51824828-e3b1-4d72-9838-8c95fcef21ca\") " pod="openstack/dnsmasq-dns-6bc7876d45-kcj9n" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.147967 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9kd5\" (UniqueName: \"kubernetes.io/projected/51824828-e3b1-4d72-9838-8c95fcef21ca-kube-api-access-f9kd5\") pod \"dnsmasq-dns-6bc7876d45-kcj9n\" (UID: \"51824828-e3b1-4d72-9838-8c95fcef21ca\") " pod="openstack/dnsmasq-dns-6bc7876d45-kcj9n" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.147994 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51824828-e3b1-4d72-9838-8c95fcef21ca-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-kcj9n\" (UID: 
\"51824828-e3b1-4d72-9838-8c95fcef21ca\") " pod="openstack/dnsmasq-dns-6bc7876d45-kcj9n" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.151243 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51824828-e3b1-4d72-9838-8c95fcef21ca-config\") pod \"dnsmasq-dns-6bc7876d45-kcj9n\" (UID: \"51824828-e3b1-4d72-9838-8c95fcef21ca\") " pod="openstack/dnsmasq-dns-6bc7876d45-kcj9n" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.153540 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51824828-e3b1-4d72-9838-8c95fcef21ca-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-kcj9n\" (UID: \"51824828-e3b1-4d72-9838-8c95fcef21ca\") " pod="openstack/dnsmasq-dns-6bc7876d45-kcj9n" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.154083 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51824828-e3b1-4d72-9838-8c95fcef21ca-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-kcj9n\" (UID: \"51824828-e3b1-4d72-9838-8c95fcef21ca\") " pod="openstack/dnsmasq-dns-6bc7876d45-kcj9n" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.176488 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9kd5\" (UniqueName: \"kubernetes.io/projected/51824828-e3b1-4d72-9838-8c95fcef21ca-kube-api-access-f9kd5\") pod \"dnsmasq-dns-6bc7876d45-kcj9n\" (UID: \"51824828-e3b1-4d72-9838-8c95fcef21ca\") " pod="openstack/dnsmasq-dns-6bc7876d45-kcj9n" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.254709 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-kcj9n" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.495685 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.497988 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.501045 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.501415 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.501712 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-9nc6n" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.502796 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.511365 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.577987 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bfeecd83-4225-4d76-8002-6593dc66ab4f\") " pod="openstack/ovsdbserver-nb-0" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.578082 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfeecd83-4225-4d76-8002-6593dc66ab4f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bfeecd83-4225-4d76-8002-6593dc66ab4f\") " 
pod="openstack/ovsdbserver-nb-0" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.578130 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfeecd83-4225-4d76-8002-6593dc66ab4f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bfeecd83-4225-4d76-8002-6593dc66ab4f\") " pod="openstack/ovsdbserver-nb-0" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.578218 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfeecd83-4225-4d76-8002-6593dc66ab4f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bfeecd83-4225-4d76-8002-6593dc66ab4f\") " pod="openstack/ovsdbserver-nb-0" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.578588 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfeecd83-4225-4d76-8002-6593dc66ab4f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bfeecd83-4225-4d76-8002-6593dc66ab4f\") " pod="openstack/ovsdbserver-nb-0" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.578691 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7nd6\" (UniqueName: \"kubernetes.io/projected/bfeecd83-4225-4d76-8002-6593dc66ab4f-kube-api-access-r7nd6\") pod \"ovsdbserver-nb-0\" (UID: \"bfeecd83-4225-4d76-8002-6593dc66ab4f\") " pod="openstack/ovsdbserver-nb-0" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.578741 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bfeecd83-4225-4d76-8002-6593dc66ab4f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bfeecd83-4225-4d76-8002-6593dc66ab4f\") " pod="openstack/ovsdbserver-nb-0" Dec 01 15:01:44 crc 
kubenswrapper[4637]: I1201 15:01:44.578784 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfeecd83-4225-4d76-8002-6593dc66ab4f-config\") pod \"ovsdbserver-nb-0\" (UID: \"bfeecd83-4225-4d76-8002-6593dc66ab4f\") " pod="openstack/ovsdbserver-nb-0" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.682534 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bfeecd83-4225-4d76-8002-6593dc66ab4f\") " pod="openstack/ovsdbserver-nb-0" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.682643 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfeecd83-4225-4d76-8002-6593dc66ab4f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bfeecd83-4225-4d76-8002-6593dc66ab4f\") " pod="openstack/ovsdbserver-nb-0" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.682701 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfeecd83-4225-4d76-8002-6593dc66ab4f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bfeecd83-4225-4d76-8002-6593dc66ab4f\") " pod="openstack/ovsdbserver-nb-0" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.682737 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfeecd83-4225-4d76-8002-6593dc66ab4f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bfeecd83-4225-4d76-8002-6593dc66ab4f\") " pod="openstack/ovsdbserver-nb-0" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.682828 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bfeecd83-4225-4d76-8002-6593dc66ab4f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bfeecd83-4225-4d76-8002-6593dc66ab4f\") " pod="openstack/ovsdbserver-nb-0" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.682940 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7nd6\" (UniqueName: \"kubernetes.io/projected/bfeecd83-4225-4d76-8002-6593dc66ab4f-kube-api-access-r7nd6\") pod \"ovsdbserver-nb-0\" (UID: \"bfeecd83-4225-4d76-8002-6593dc66ab4f\") " pod="openstack/ovsdbserver-nb-0" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.682970 4637 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bfeecd83-4225-4d76-8002-6593dc66ab4f\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-nb-0" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.683012 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bfeecd83-4225-4d76-8002-6593dc66ab4f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bfeecd83-4225-4d76-8002-6593dc66ab4f\") " pod="openstack/ovsdbserver-nb-0" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.683047 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfeecd83-4225-4d76-8002-6593dc66ab4f-config\") pod \"ovsdbserver-nb-0\" (UID: \"bfeecd83-4225-4d76-8002-6593dc66ab4f\") " pod="openstack/ovsdbserver-nb-0" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.684527 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfeecd83-4225-4d76-8002-6593dc66ab4f-config\") pod \"ovsdbserver-nb-0\" (UID: \"bfeecd83-4225-4d76-8002-6593dc66ab4f\") " pod="openstack/ovsdbserver-nb-0" 
Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.685517 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bfeecd83-4225-4d76-8002-6593dc66ab4f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bfeecd83-4225-4d76-8002-6593dc66ab4f\") " pod="openstack/ovsdbserver-nb-0" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.687977 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfeecd83-4225-4d76-8002-6593dc66ab4f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bfeecd83-4225-4d76-8002-6593dc66ab4f\") " pod="openstack/ovsdbserver-nb-0" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.691001 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfeecd83-4225-4d76-8002-6593dc66ab4f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bfeecd83-4225-4d76-8002-6593dc66ab4f\") " pod="openstack/ovsdbserver-nb-0" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.709628 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfeecd83-4225-4d76-8002-6593dc66ab4f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bfeecd83-4225-4d76-8002-6593dc66ab4f\") " pod="openstack/ovsdbserver-nb-0" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.714291 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7nd6\" (UniqueName: \"kubernetes.io/projected/bfeecd83-4225-4d76-8002-6593dc66ab4f-kube-api-access-r7nd6\") pod \"ovsdbserver-nb-0\" (UID: \"bfeecd83-4225-4d76-8002-6593dc66ab4f\") " pod="openstack/ovsdbserver-nb-0" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.721356 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bfeecd83-4225-4d76-8002-6593dc66ab4f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bfeecd83-4225-4d76-8002-6593dc66ab4f\") " pod="openstack/ovsdbserver-nb-0" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.734156 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bfeecd83-4225-4d76-8002-6593dc66ab4f\") " pod="openstack/ovsdbserver-nb-0" Dec 01 15:01:44 crc kubenswrapper[4637]: I1201 15:01:44.846694 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 01 15:01:45 crc kubenswrapper[4637]: I1201 15:01:45.613995 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:01:45 crc kubenswrapper[4637]: I1201 15:01:45.614329 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:01:45 crc kubenswrapper[4637]: I1201 15:01:45.614371 4637 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 15:01:45 crc kubenswrapper[4637]: I1201 15:01:45.615253 4637 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e979781fddd064342f7468d039fd5c3c7d452779d2cd7d5b9f3797e85de0bed3"} pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 15:01:45 crc kubenswrapper[4637]: I1201 15:01:45.615304 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" containerID="cri-o://e979781fddd064342f7468d039fd5c3c7d452779d2cd7d5b9f3797e85de0bed3" gracePeriod=600 Dec 01 15:01:45 crc kubenswrapper[4637]: I1201 15:01:45.859907 4637 generic.go:334] "Generic (PLEG): container finished" podID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerID="e979781fddd064342f7468d039fd5c3c7d452779d2cd7d5b9f3797e85de0bed3" exitCode=0 Dec 01 15:01:45 crc kubenswrapper[4637]: I1201 15:01:45.859968 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerDied","Data":"e979781fddd064342f7468d039fd5c3c7d452779d2cd7d5b9f3797e85de0bed3"} Dec 01 15:01:45 crc kubenswrapper[4637]: I1201 15:01:45.860041 4637 scope.go:117] "RemoveContainer" containerID="f441d58a7fd53036d54f051f6c8a3463949b9941e99c7ef5c07b779f2546fa99" Dec 01 15:01:48 crc kubenswrapper[4637]: W1201 15:01:48.627310 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4037f80_7861_4283_99d5_2b078ef3de4b.slice/crio-9fe522eaa0dadc32cd953f69f603677c7a0f9985daebfa00cf75d6d180d773b2 WatchSource:0}: Error finding container 9fe522eaa0dadc32cd953f69f603677c7a0f9985daebfa00cf75d6d180d773b2: Status 404 returned error can't find the container with id 9fe522eaa0dadc32cd953f69f603677c7a0f9985daebfa00cf75d6d180d773b2 Dec 01 15:01:48 crc kubenswrapper[4637]: I1201 15:01:48.898447 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"db3ed190-cdcc-4547-b48f-d09f6e881dfb","Type":"ContainerStarted","Data":"21acb5df6a284ec050ae3d24480934ede21f700b178cccc7bdc86f74ece7cc65"} Dec 01 15:01:48 crc kubenswrapper[4637]: I1201 15:01:48.901762 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9pxbh" event={"ID":"e4037f80-7861-4283-99d5-2b078ef3de4b","Type":"ContainerStarted","Data":"9fe522eaa0dadc32cd953f69f603677c7a0f9985daebfa00cf75d6d180d773b2"} Dec 01 15:01:57 crc kubenswrapper[4637]: I1201 15:01:57.247167 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xsqc9"] Dec 01 15:01:57 crc kubenswrapper[4637]: I1201 15:01:57.251020 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xsqc9" Dec 01 15:01:57 crc kubenswrapper[4637]: I1201 15:01:57.269427 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xsqc9"] Dec 01 15:01:57 crc kubenswrapper[4637]: I1201 15:01:57.363970 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30f33978-c393-4faf-99c0-b6ce509f1d3f-utilities\") pod \"redhat-operators-xsqc9\" (UID: \"30f33978-c393-4faf-99c0-b6ce509f1d3f\") " pod="openshift-marketplace/redhat-operators-xsqc9" Dec 01 15:01:57 crc kubenswrapper[4637]: I1201 15:01:57.364306 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4xvw\" (UniqueName: \"kubernetes.io/projected/30f33978-c393-4faf-99c0-b6ce509f1d3f-kube-api-access-n4xvw\") pod \"redhat-operators-xsqc9\" (UID: \"30f33978-c393-4faf-99c0-b6ce509f1d3f\") " pod="openshift-marketplace/redhat-operators-xsqc9" Dec 01 15:01:57 crc kubenswrapper[4637]: I1201 15:01:57.364555 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/30f33978-c393-4faf-99c0-b6ce509f1d3f-catalog-content\") pod \"redhat-operators-xsqc9\" (UID: \"30f33978-c393-4faf-99c0-b6ce509f1d3f\") " pod="openshift-marketplace/redhat-operators-xsqc9" Dec 01 15:01:57 crc kubenswrapper[4637]: I1201 15:01:57.466891 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30f33978-c393-4faf-99c0-b6ce509f1d3f-catalog-content\") pod \"redhat-operators-xsqc9\" (UID: \"30f33978-c393-4faf-99c0-b6ce509f1d3f\") " pod="openshift-marketplace/redhat-operators-xsqc9" Dec 01 15:01:57 crc kubenswrapper[4637]: I1201 15:01:57.467009 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30f33978-c393-4faf-99c0-b6ce509f1d3f-utilities\") pod \"redhat-operators-xsqc9\" (UID: \"30f33978-c393-4faf-99c0-b6ce509f1d3f\") " pod="openshift-marketplace/redhat-operators-xsqc9" Dec 01 15:01:57 crc kubenswrapper[4637]: I1201 15:01:57.467099 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4xvw\" (UniqueName: \"kubernetes.io/projected/30f33978-c393-4faf-99c0-b6ce509f1d3f-kube-api-access-n4xvw\") pod \"redhat-operators-xsqc9\" (UID: \"30f33978-c393-4faf-99c0-b6ce509f1d3f\") " pod="openshift-marketplace/redhat-operators-xsqc9" Dec 01 15:01:57 crc kubenswrapper[4637]: I1201 15:01:57.467572 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30f33978-c393-4faf-99c0-b6ce509f1d3f-catalog-content\") pod \"redhat-operators-xsqc9\" (UID: \"30f33978-c393-4faf-99c0-b6ce509f1d3f\") " pod="openshift-marketplace/redhat-operators-xsqc9" Dec 01 15:01:57 crc kubenswrapper[4637]: I1201 15:01:57.467673 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/30f33978-c393-4faf-99c0-b6ce509f1d3f-utilities\") pod \"redhat-operators-xsqc9\" (UID: \"30f33978-c393-4faf-99c0-b6ce509f1d3f\") " pod="openshift-marketplace/redhat-operators-xsqc9" Dec 01 15:01:57 crc kubenswrapper[4637]: I1201 15:01:57.486672 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4xvw\" (UniqueName: \"kubernetes.io/projected/30f33978-c393-4faf-99c0-b6ce509f1d3f-kube-api-access-n4xvw\") pod \"redhat-operators-xsqc9\" (UID: \"30f33978-c393-4faf-99c0-b6ce509f1d3f\") " pod="openshift-marketplace/redhat-operators-xsqc9" Dec 01 15:01:57 crc kubenswrapper[4637]: I1201 15:01:57.585664 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xsqc9" Dec 01 15:01:59 crc kubenswrapper[4637]: E1201 15:01:59.769533 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Dec 01 15:01:59 crc kubenswrapper[4637]: E1201 15:01:59.770008 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n555h66fhf7h7fh57h695hd5h59fh65fh545h55h654h74h74h675h544hd9h9bh574h665h5bh5d5h6h688h7bh569h98h56fh5dh59bh579h666q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nrjvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(5903ea37-db92-4a76-afb4-14cfa23415d0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 15:01:59 crc kubenswrapper[4637]: E1201 15:01:59.779096 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="5903ea37-db92-4a76-afb4-14cfa23415d0" Dec 01 15:01:59 crc kubenswrapper[4637]: I1201 15:01:59.798758 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cjd5c"] Dec 01 15:01:59 crc kubenswrapper[4637]: I1201 15:01:59.800818 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cjd5c" Dec 01 15:01:59 crc kubenswrapper[4637]: I1201 15:01:59.848030 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cjd5c"] Dec 01 15:01:59 crc kubenswrapper[4637]: I1201 15:01:59.913974 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb20e93f-4cda-4b8f-a1d8-7eaa697673b3-utilities\") pod \"redhat-marketplace-cjd5c\" (UID: \"bb20e93f-4cda-4b8f-a1d8-7eaa697673b3\") " pod="openshift-marketplace/redhat-marketplace-cjd5c" Dec 01 15:01:59 crc kubenswrapper[4637]: I1201 15:01:59.914197 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb20e93f-4cda-4b8f-a1d8-7eaa697673b3-catalog-content\") pod \"redhat-marketplace-cjd5c\" (UID: \"bb20e93f-4cda-4b8f-a1d8-7eaa697673b3\") " pod="openshift-marketplace/redhat-marketplace-cjd5c" Dec 01 15:01:59 crc kubenswrapper[4637]: I1201 15:01:59.914317 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8xbq\" (UniqueName: \"kubernetes.io/projected/bb20e93f-4cda-4b8f-a1d8-7eaa697673b3-kube-api-access-t8xbq\") pod \"redhat-marketplace-cjd5c\" (UID: \"bb20e93f-4cda-4b8f-a1d8-7eaa697673b3\") " pod="openshift-marketplace/redhat-marketplace-cjd5c" Dec 01 15:02:00 crc kubenswrapper[4637]: E1201 15:01:59.998971 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="5903ea37-db92-4a76-afb4-14cfa23415d0" Dec 01 15:02:00 crc kubenswrapper[4637]: I1201 15:02:00.015865 4637 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb20e93f-4cda-4b8f-a1d8-7eaa697673b3-catalog-content\") pod \"redhat-marketplace-cjd5c\" (UID: \"bb20e93f-4cda-4b8f-a1d8-7eaa697673b3\") " pod="openshift-marketplace/redhat-marketplace-cjd5c" Dec 01 15:02:00 crc kubenswrapper[4637]: I1201 15:02:00.015967 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8xbq\" (UniqueName: \"kubernetes.io/projected/bb20e93f-4cda-4b8f-a1d8-7eaa697673b3-kube-api-access-t8xbq\") pod \"redhat-marketplace-cjd5c\" (UID: \"bb20e93f-4cda-4b8f-a1d8-7eaa697673b3\") " pod="openshift-marketplace/redhat-marketplace-cjd5c" Dec 01 15:02:00 crc kubenswrapper[4637]: I1201 15:02:00.016003 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb20e93f-4cda-4b8f-a1d8-7eaa697673b3-utilities\") pod \"redhat-marketplace-cjd5c\" (UID: \"bb20e93f-4cda-4b8f-a1d8-7eaa697673b3\") " pod="openshift-marketplace/redhat-marketplace-cjd5c" Dec 01 15:02:00 crc kubenswrapper[4637]: I1201 15:02:00.016478 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb20e93f-4cda-4b8f-a1d8-7eaa697673b3-utilities\") pod \"redhat-marketplace-cjd5c\" (UID: \"bb20e93f-4cda-4b8f-a1d8-7eaa697673b3\") " pod="openshift-marketplace/redhat-marketplace-cjd5c" Dec 01 15:02:00 crc kubenswrapper[4637]: I1201 15:02:00.016696 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb20e93f-4cda-4b8f-a1d8-7eaa697673b3-catalog-content\") pod \"redhat-marketplace-cjd5c\" (UID: \"bb20e93f-4cda-4b8f-a1d8-7eaa697673b3\") " pod="openshift-marketplace/redhat-marketplace-cjd5c" Dec 01 15:02:00 crc kubenswrapper[4637]: I1201 15:02:00.069169 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8xbq\" (UniqueName: 
\"kubernetes.io/projected/bb20e93f-4cda-4b8f-a1d8-7eaa697673b3-kube-api-access-t8xbq\") pod \"redhat-marketplace-cjd5c\" (UID: \"bb20e93f-4cda-4b8f-a1d8-7eaa697673b3\") " pod="openshift-marketplace/redhat-marketplace-cjd5c" Dec 01 15:02:00 crc kubenswrapper[4637]: I1201 15:02:00.138416 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cjd5c" Dec 01 15:02:02 crc kubenswrapper[4637]: I1201 15:02:02.829991 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wm78x"] Dec 01 15:02:02 crc kubenswrapper[4637]: I1201 15:02:02.839251 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wm78x" Dec 01 15:02:02 crc kubenswrapper[4637]: I1201 15:02:02.845432 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wm78x"] Dec 01 15:02:02 crc kubenswrapper[4637]: I1201 15:02:02.973461 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/908ecd2f-4feb-4dc1-b8f0-555329bbe3b6-catalog-content\") pod \"certified-operators-wm78x\" (UID: \"908ecd2f-4feb-4dc1-b8f0-555329bbe3b6\") " pod="openshift-marketplace/certified-operators-wm78x" Dec 01 15:02:02 crc kubenswrapper[4637]: I1201 15:02:02.973555 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/908ecd2f-4feb-4dc1-b8f0-555329bbe3b6-utilities\") pod \"certified-operators-wm78x\" (UID: \"908ecd2f-4feb-4dc1-b8f0-555329bbe3b6\") " pod="openshift-marketplace/certified-operators-wm78x" Dec 01 15:02:02 crc kubenswrapper[4637]: I1201 15:02:02.973886 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmccg\" (UniqueName: 
\"kubernetes.io/projected/908ecd2f-4feb-4dc1-b8f0-555329bbe3b6-kube-api-access-gmccg\") pod \"certified-operators-wm78x\" (UID: \"908ecd2f-4feb-4dc1-b8f0-555329bbe3b6\") " pod="openshift-marketplace/certified-operators-wm78x" Dec 01 15:02:03 crc kubenswrapper[4637]: I1201 15:02:03.075793 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/908ecd2f-4feb-4dc1-b8f0-555329bbe3b6-catalog-content\") pod \"certified-operators-wm78x\" (UID: \"908ecd2f-4feb-4dc1-b8f0-555329bbe3b6\") " pod="openshift-marketplace/certified-operators-wm78x" Dec 01 15:02:03 crc kubenswrapper[4637]: I1201 15:02:03.075876 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/908ecd2f-4feb-4dc1-b8f0-555329bbe3b6-utilities\") pod \"certified-operators-wm78x\" (UID: \"908ecd2f-4feb-4dc1-b8f0-555329bbe3b6\") " pod="openshift-marketplace/certified-operators-wm78x" Dec 01 15:02:03 crc kubenswrapper[4637]: I1201 15:02:03.075939 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmccg\" (UniqueName: \"kubernetes.io/projected/908ecd2f-4feb-4dc1-b8f0-555329bbe3b6-kube-api-access-gmccg\") pod \"certified-operators-wm78x\" (UID: \"908ecd2f-4feb-4dc1-b8f0-555329bbe3b6\") " pod="openshift-marketplace/certified-operators-wm78x" Dec 01 15:02:03 crc kubenswrapper[4637]: I1201 15:02:03.076611 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/908ecd2f-4feb-4dc1-b8f0-555329bbe3b6-catalog-content\") pod \"certified-operators-wm78x\" (UID: \"908ecd2f-4feb-4dc1-b8f0-555329bbe3b6\") " pod="openshift-marketplace/certified-operators-wm78x" Dec 01 15:02:03 crc kubenswrapper[4637]: I1201 15:02:03.076682 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/908ecd2f-4feb-4dc1-b8f0-555329bbe3b6-utilities\") pod \"certified-operators-wm78x\" (UID: \"908ecd2f-4feb-4dc1-b8f0-555329bbe3b6\") " pod="openshift-marketplace/certified-operators-wm78x" Dec 01 15:02:03 crc kubenswrapper[4637]: I1201 15:02:03.111338 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmccg\" (UniqueName: \"kubernetes.io/projected/908ecd2f-4feb-4dc1-b8f0-555329bbe3b6-kube-api-access-gmccg\") pod \"certified-operators-wm78x\" (UID: \"908ecd2f-4feb-4dc1-b8f0-555329bbe3b6\") " pod="openshift-marketplace/certified-operators-wm78x" Dec 01 15:02:03 crc kubenswrapper[4637]: I1201 15:02:03.170280 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wm78x" Dec 01 15:02:06 crc kubenswrapper[4637]: E1201 15:02:06.656796 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 01 15:02:06 crc kubenswrapper[4637]: E1201 15:02:06.657494 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t87st,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-dtl4w_openstack(b29b2b9a-60f3-4a4c-bcce-c1ab361babe9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 15:02:06 crc kubenswrapper[4637]: E1201 15:02:06.660147 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-dtl4w" podUID="b29b2b9a-60f3-4a4c-bcce-c1ab361babe9" Dec 01 15:02:10 crc kubenswrapper[4637]: E1201 15:02:10.233287 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 01 15:02:10 crc kubenswrapper[4637]: E1201 15:02:10.234056 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:DB_ROOT_PASSWORD,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:DbRootPassword,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:secrets,ReadOnly:true,MountPath:/var/lib/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:tru
e,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-48glk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(3cfe4e59-0e72-4440-b962-2f86664cb2d7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 15:02:10 crc kubenswrapper[4637]: E1201 15:02:10.235268 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="3cfe4e59-0e72-4440-b962-2f86664cb2d7" Dec 01 15:02:10 crc kubenswrapper[4637]: E1201 15:02:10.253102 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:650ba1d1fb09dfb2a96481f2ebae84fbae4b3474d057dd6d1569ce094dc41659: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-ovn-controller/blobs/sha256:650ba1d1fb09dfb2a96481f2ebae84fbae4b3474d057dd6d1569ce094dc41659\": context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Dec 01 15:02:10 crc kubenswrapper[4637]: 
E1201 15:02:10.253316 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f4h64bhd9h59ch5b4hdh566h5dh659h54dh5f9h65ch64ch684hdbh69h9ch5b9hd5h544h5bfh67bh56ch66bhf9h645h66dh5c7h5c7h57ch5cdhb8q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveRead
Only:nil,},VolumeMount{Name:kube-api-access-mhjb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-dhqng_openstack(8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:650ba1d1fb09dfb2a96481f2ebae84fbae4b3474d057dd6d1569ce094dc41659: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-ovn-controller/blobs/sha256:650ba1d1fb09dfb2a96481f2ebae84fbae4b3474d057dd6d1569ce094dc41659\": context canceled" 
logger="UnhandledError" Dec 01 15:02:10 crc kubenswrapper[4637]: E1201 15:02:10.254781 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:650ba1d1fb09dfb2a96481f2ebae84fbae4b3474d057dd6d1569ce094dc41659: Get \\\"https://quay.io/v2/podified-antelope-centos9/openstack-ovn-controller/blobs/sha256:650ba1d1fb09dfb2a96481f2ebae84fbae4b3474d057dd6d1569ce094dc41659\\\": context canceled\"" pod="openstack/ovn-controller-dhqng" podUID="8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e" Dec 01 15:02:10 crc kubenswrapper[4637]: E1201 15:02:10.310978 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 01 15:02:10 crc kubenswrapper[4637]: E1201 15:02:10.311191 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tkkw9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-8wt54_openstack(25ee40bc-05f5-4401-9fba-b3333df6b27e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 15:02:10 crc kubenswrapper[4637]: E1201 15:02:10.312391 4637 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-8wt54" podUID="25ee40bc-05f5-4401-9fba-b3333df6b27e" Dec 01 15:02:10 crc kubenswrapper[4637]: E1201 15:02:10.339465 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 01 15:02:10 crc kubenswrapper[4637]: E1201 15:02:10.339629 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-97k94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-82qbr_openstack(5d995cbf-5781-40e4-bb41-5f95f541f3c9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 15:02:10 crc kubenswrapper[4637]: E1201 15:02:10.340798 4637 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-82qbr" podUID="5d995cbf-5781-40e4-bb41-5f95f541f3c9" Dec 01 15:02:11 crc kubenswrapper[4637]: E1201 15:02:11.083336 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-dhqng" podUID="8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e" Dec 01 15:02:11 crc kubenswrapper[4637]: E1201 15:02:11.083612 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-8wt54" podUID="25ee40bc-05f5-4401-9fba-b3333df6b27e" Dec 01 15:02:11 crc kubenswrapper[4637]: E1201 15:02:11.083655 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="3cfe4e59-0e72-4440-b962-2f86664cb2d7" Dec 01 15:02:11 crc kubenswrapper[4637]: E1201 15:02:11.503786 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:650ba1d1fb09dfb2a96481f2ebae84fbae4b3474d057dd6d1569ce094dc41659: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-ovn-sb-db-server/blobs/sha256:650ba1d1fb09dfb2a96481f2ebae84fbae4b3474d057dd6d1569ce094dc41659\": context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified" Dec 01 15:02:11 crc kubenswrapper[4637]: E1201 15:02:11.503996 
4637 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-sb,Image:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n77hd6h57bh64ch66fhf4h5c7h58fh568h9chf9h64hcbh5d9h687hd7h56dh5c9hffh669h655hfbhdh587h56chb8h5bbh5b8h557h554h54fh8fq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xm6fc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,
},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(db3ed190-cdcc-4547-b48f-d09f6e881dfb): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:650ba1d1fb09dfb2a96481f2ebae84fbae4b3474d057dd6d1569ce094dc41659: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-ovn-sb-db-server/blobs/sha256:650ba1d1fb09dfb2a96481f2ebae84fbae4b3474d057dd6d1569ce094dc41659\": context canceled" logger="UnhandledError" Dec 01 
15:02:11 crc kubenswrapper[4637]: E1201 15:02:11.541142 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 01 15:02:11 crc kubenswrapper[4637]: E1201 15:02:11.541328 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2fcxg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(bee806ff-8bec-49d0-a47f-bfd8edbb36fb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 15:02:11 crc 
kubenswrapper[4637]: E1201 15:02:11.542714 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="bee806ff-8bec-49d0-a47f-bfd8edbb36fb" Dec 01 15:02:11 crc kubenswrapper[4637]: E1201 15:02:11.574435 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 01 15:02:11 crc kubenswrapper[4637]: E1201 15:02:11.575149 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rgmpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(8eeaa55a-2c35-480c-baec-134ef1158e66): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 15:02:11 crc 
kubenswrapper[4637]: E1201 15:02:11.576304 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="8eeaa55a-2c35-480c-baec-134ef1158e66" Dec 01 15:02:11 crc kubenswrapper[4637]: E1201 15:02:11.580051 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 01 15:02:11 crc kubenswrapper[4637]: E1201 15:02:11.580234 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:DB_ROOT_PASSWORD,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:DbRootPassword,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:secrets,ReadOnly:true,MountPath:/var/lib/secrets,SubPath:,MountPropagati
on:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wzns4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(dae9e33c-c07e-4c13-8104-d1310d91de8c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 15:02:11 crc kubenswrapper[4637]: E1201 15:02:11.581411 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="dae9e33c-c07e-4c13-8104-d1310d91de8c" Dec 01 15:02:11 crc kubenswrapper[4637]: E1201 15:02:11.615878 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 01 15:02:11 crc kubenswrapper[4637]: 
E1201 15:02:11.616046 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pf7px,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY
:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-hjkrb_openstack(a79142fd-eac0-49db-8c5b-11cce20542ef): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 15:02:11 crc kubenswrapper[4637]: E1201 15:02:11.617193 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-hjkrb" podUID="a79142fd-eac0-49db-8c5b-11cce20542ef" Dec 01 15:02:11 crc kubenswrapper[4637]: I1201 15:02:11.661628 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-dtl4w" Dec 01 15:02:11 crc kubenswrapper[4637]: I1201 15:02:11.674350 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-82qbr" Dec 01 15:02:11 crc kubenswrapper[4637]: I1201 15:02:11.773454 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b29b2b9a-60f3-4a4c-bcce-c1ab361babe9-config\") pod \"b29b2b9a-60f3-4a4c-bcce-c1ab361babe9\" (UID: \"b29b2b9a-60f3-4a4c-bcce-c1ab361babe9\") " Dec 01 15:02:11 crc kubenswrapper[4637]: I1201 15:02:11.773541 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d995cbf-5781-40e4-bb41-5f95f541f3c9-dns-svc\") pod \"5d995cbf-5781-40e4-bb41-5f95f541f3c9\" (UID: \"5d995cbf-5781-40e4-bb41-5f95f541f3c9\") " Dec 01 15:02:11 crc kubenswrapper[4637]: I1201 15:02:11.773584 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d995cbf-5781-40e4-bb41-5f95f541f3c9-config\") pod \"5d995cbf-5781-40e4-bb41-5f95f541f3c9\" (UID: \"5d995cbf-5781-40e4-bb41-5f95f541f3c9\") " Dec 01 15:02:11 crc kubenswrapper[4637]: I1201 15:02:11.773651 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t87st\" (UniqueName: \"kubernetes.io/projected/b29b2b9a-60f3-4a4c-bcce-c1ab361babe9-kube-api-access-t87st\") pod \"b29b2b9a-60f3-4a4c-bcce-c1ab361babe9\" (UID: \"b29b2b9a-60f3-4a4c-bcce-c1ab361babe9\") " Dec 01 15:02:11 crc kubenswrapper[4637]: I1201 15:02:11.773796 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97k94\" (UniqueName: \"kubernetes.io/projected/5d995cbf-5781-40e4-bb41-5f95f541f3c9-kube-api-access-97k94\") pod \"5d995cbf-5781-40e4-bb41-5f95f541f3c9\" (UID: \"5d995cbf-5781-40e4-bb41-5f95f541f3c9\") " Dec 01 15:02:11 crc kubenswrapper[4637]: I1201 15:02:11.774509 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/5d995cbf-5781-40e4-bb41-5f95f541f3c9-config" (OuterVolumeSpecName: "config") pod "5d995cbf-5781-40e4-bb41-5f95f541f3c9" (UID: "5d995cbf-5781-40e4-bb41-5f95f541f3c9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:02:11 crc kubenswrapper[4637]: I1201 15:02:11.774516 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d995cbf-5781-40e4-bb41-5f95f541f3c9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5d995cbf-5781-40e4-bb41-5f95f541f3c9" (UID: "5d995cbf-5781-40e4-bb41-5f95f541f3c9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:02:11 crc kubenswrapper[4637]: I1201 15:02:11.774866 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b29b2b9a-60f3-4a4c-bcce-c1ab361babe9-config" (OuterVolumeSpecName: "config") pod "b29b2b9a-60f3-4a4c-bcce-c1ab361babe9" (UID: "b29b2b9a-60f3-4a4c-bcce-c1ab361babe9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:02:11 crc kubenswrapper[4637]: I1201 15:02:11.779110 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b29b2b9a-60f3-4a4c-bcce-c1ab361babe9-kube-api-access-t87st" (OuterVolumeSpecName: "kube-api-access-t87st") pod "b29b2b9a-60f3-4a4c-bcce-c1ab361babe9" (UID: "b29b2b9a-60f3-4a4c-bcce-c1ab361babe9"). InnerVolumeSpecName "kube-api-access-t87st". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:02:11 crc kubenswrapper[4637]: I1201 15:02:11.782010 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d995cbf-5781-40e4-bb41-5f95f541f3c9-kube-api-access-97k94" (OuterVolumeSpecName: "kube-api-access-97k94") pod "5d995cbf-5781-40e4-bb41-5f95f541f3c9" (UID: "5d995cbf-5781-40e4-bb41-5f95f541f3c9"). InnerVolumeSpecName "kube-api-access-97k94". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:02:11 crc kubenswrapper[4637]: I1201 15:02:11.876171 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97k94\" (UniqueName: \"kubernetes.io/projected/5d995cbf-5781-40e4-bb41-5f95f541f3c9-kube-api-access-97k94\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:11 crc kubenswrapper[4637]: I1201 15:02:11.876206 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b29b2b9a-60f3-4a4c-bcce-c1ab361babe9-config\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:11 crc kubenswrapper[4637]: I1201 15:02:11.876216 4637 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d995cbf-5781-40e4-bb41-5f95f541f3c9-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:11 crc kubenswrapper[4637]: I1201 15:02:11.876224 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d995cbf-5781-40e4-bb41-5f95f541f3c9-config\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:11 crc kubenswrapper[4637]: I1201 15:02:11.876236 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t87st\" (UniqueName: \"kubernetes.io/projected/b29b2b9a-60f3-4a4c-bcce-c1ab361babe9-kube-api-access-t87st\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:12 crc kubenswrapper[4637]: I1201 15:02:12.102042 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-82qbr" Dec 01 15:02:12 crc kubenswrapper[4637]: I1201 15:02:12.102060 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-82qbr" event={"ID":"5d995cbf-5781-40e4-bb41-5f95f541f3c9","Type":"ContainerDied","Data":"6b726ecda5bffdc433606a43b48386e54f62b4362fdaa386e55f8f4795af7c0f"} Dec 01 15:02:12 crc kubenswrapper[4637]: I1201 15:02:12.110146 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-dtl4w" event={"ID":"b29b2b9a-60f3-4a4c-bcce-c1ab361babe9","Type":"ContainerDied","Data":"c3430ac16b5537653d7cd9dffe1dfe0828cdfbba509ef355650d1e98ce4bc5b3"} Dec 01 15:02:12 crc kubenswrapper[4637]: I1201 15:02:12.110348 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-dtl4w" Dec 01 15:02:12 crc kubenswrapper[4637]: E1201 15:02:12.115853 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="8eeaa55a-2c35-480c-baec-134ef1158e66" Dec 01 15:02:12 crc kubenswrapper[4637]: E1201 15:02:12.116946 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="dae9e33c-c07e-4c13-8104-d1310d91de8c" Dec 01 15:02:12 crc kubenswrapper[4637]: E1201 15:02:12.117001 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" 
podUID="bee806ff-8bec-49d0-a47f-bfd8edbb36fb" Dec 01 15:02:12 crc kubenswrapper[4637]: I1201 15:02:12.251121 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dtl4w"] Dec 01 15:02:12 crc kubenswrapper[4637]: I1201 15:02:12.259308 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dtl4w"] Dec 01 15:02:12 crc kubenswrapper[4637]: I1201 15:02:12.307029 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-82qbr"] Dec 01 15:02:12 crc kubenswrapper[4637]: I1201 15:02:12.324345 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-82qbr"] Dec 01 15:02:13 crc kubenswrapper[4637]: I1201 15:02:13.645620 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-hjkrb" Dec 01 15:02:13 crc kubenswrapper[4637]: I1201 15:02:13.726656 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a79142fd-eac0-49db-8c5b-11cce20542ef-dns-svc\") pod \"a79142fd-eac0-49db-8c5b-11cce20542ef\" (UID: \"a79142fd-eac0-49db-8c5b-11cce20542ef\") " Dec 01 15:02:13 crc kubenswrapper[4637]: I1201 15:02:13.727215 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf7px\" (UniqueName: \"kubernetes.io/projected/a79142fd-eac0-49db-8c5b-11cce20542ef-kube-api-access-pf7px\") pod \"a79142fd-eac0-49db-8c5b-11cce20542ef\" (UID: \"a79142fd-eac0-49db-8c5b-11cce20542ef\") " Dec 01 15:02:13 crc kubenswrapper[4637]: I1201 15:02:13.727367 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79142fd-eac0-49db-8c5b-11cce20542ef-config\") pod \"a79142fd-eac0-49db-8c5b-11cce20542ef\" (UID: \"a79142fd-eac0-49db-8c5b-11cce20542ef\") " Dec 01 15:02:13 crc kubenswrapper[4637]: I1201 15:02:13.727581 
4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a79142fd-eac0-49db-8c5b-11cce20542ef-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a79142fd-eac0-49db-8c5b-11cce20542ef" (UID: "a79142fd-eac0-49db-8c5b-11cce20542ef"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:02:13 crc kubenswrapper[4637]: I1201 15:02:13.729970 4637 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a79142fd-eac0-49db-8c5b-11cce20542ef-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:13 crc kubenswrapper[4637]: I1201 15:02:13.730800 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a79142fd-eac0-49db-8c5b-11cce20542ef-config" (OuterVolumeSpecName: "config") pod "a79142fd-eac0-49db-8c5b-11cce20542ef" (UID: "a79142fd-eac0-49db-8c5b-11cce20542ef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:02:13 crc kubenswrapper[4637]: I1201 15:02:13.770059 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a79142fd-eac0-49db-8c5b-11cce20542ef-kube-api-access-pf7px" (OuterVolumeSpecName: "kube-api-access-pf7px") pod "a79142fd-eac0-49db-8c5b-11cce20542ef" (UID: "a79142fd-eac0-49db-8c5b-11cce20542ef"). InnerVolumeSpecName "kube-api-access-pf7px". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:02:13 crc kubenswrapper[4637]: I1201 15:02:13.830515 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d995cbf-5781-40e4-bb41-5f95f541f3c9" path="/var/lib/kubelet/pods/5d995cbf-5781-40e4-bb41-5f95f541f3c9/volumes" Dec 01 15:02:13 crc kubenswrapper[4637]: I1201 15:02:13.831427 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf7px\" (UniqueName: \"kubernetes.io/projected/a79142fd-eac0-49db-8c5b-11cce20542ef-kube-api-access-pf7px\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:13 crc kubenswrapper[4637]: I1201 15:02:13.831464 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79142fd-eac0-49db-8c5b-11cce20542ef-config\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:13 crc kubenswrapper[4637]: I1201 15:02:13.831740 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b29b2b9a-60f3-4a4c-bcce-c1ab361babe9" path="/var/lib/kubelet/pods/b29b2b9a-60f3-4a4c-bcce-c1ab361babe9/volumes" Dec 01 15:02:14 crc kubenswrapper[4637]: I1201 15:02:14.128794 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerStarted","Data":"b32cce2c47c067f58e7391d6910b6c3148987eb146b9d1e7fc73d2cd86f483da"} Dec 01 15:02:14 crc kubenswrapper[4637]: I1201 15:02:14.133365 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-hjkrb" event={"ID":"a79142fd-eac0-49db-8c5b-11cce20542ef","Type":"ContainerDied","Data":"ed60ce184f7e230afac3cacb57c28b872ca0bbb301fc1c8b457ea68e159dbd6e"} Dec 01 15:02:14 crc kubenswrapper[4637]: I1201 15:02:14.133483 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-hjkrb" Dec 01 15:02:14 crc kubenswrapper[4637]: I1201 15:02:14.187408 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cjd5c"] Dec 01 15:02:14 crc kubenswrapper[4637]: I1201 15:02:14.207775 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-kcj9n"] Dec 01 15:02:14 crc kubenswrapper[4637]: I1201 15:02:14.214433 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-f4khp"] Dec 01 15:02:14 crc kubenswrapper[4637]: I1201 15:02:14.232639 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-hjkrb"] Dec 01 15:02:14 crc kubenswrapper[4637]: I1201 15:02:14.239402 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xsqc9"] Dec 01 15:02:14 crc kubenswrapper[4637]: I1201 15:02:14.245035 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-hjkrb"] Dec 01 15:02:14 crc kubenswrapper[4637]: I1201 15:02:14.409702 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wm78x"] Dec 01 15:02:14 crc kubenswrapper[4637]: I1201 15:02:14.472867 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 01 15:02:14 crc kubenswrapper[4637]: W1201 15:02:14.774351 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod908ecd2f_4feb_4dc1_b8f0_555329bbe3b6.slice/crio-a46a73fd893fdef489e4b1839454cafdd699479a925881a3f0c7db46a3c07ece WatchSource:0}: Error finding container a46a73fd893fdef489e4b1839454cafdd699479a925881a3f0c7db46a3c07ece: Status 404 returned error can't find the container with id a46a73fd893fdef489e4b1839454cafdd699479a925881a3f0c7db46a3c07ece Dec 01 15:02:14 crc kubenswrapper[4637]: W1201 15:02:14.797564 4637 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30f33978_c393_4faf_99c0_b6ce509f1d3f.slice/crio-2ef67e4525c9b73e6616612d6ccc7159ce8d82799ebfa8ba7b4ec6d608d35b10 WatchSource:0}: Error finding container 2ef67e4525c9b73e6616612d6ccc7159ce8d82799ebfa8ba7b4ec6d608d35b10: Status 404 returned error can't find the container with id 2ef67e4525c9b73e6616612d6ccc7159ce8d82799ebfa8ba7b4ec6d608d35b10 Dec 01 15:02:15 crc kubenswrapper[4637]: I1201 15:02:15.158966 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wm78x" event={"ID":"908ecd2f-4feb-4dc1-b8f0-555329bbe3b6","Type":"ContainerStarted","Data":"a46a73fd893fdef489e4b1839454cafdd699479a925881a3f0c7db46a3c07ece"} Dec 01 15:02:15 crc kubenswrapper[4637]: I1201 15:02:15.161131 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsqc9" event={"ID":"30f33978-c393-4faf-99c0-b6ce509f1d3f","Type":"ContainerStarted","Data":"2ef67e4525c9b73e6616612d6ccc7159ce8d82799ebfa8ba7b4ec6d608d35b10"} Dec 01 15:02:15 crc kubenswrapper[4637]: I1201 15:02:15.163777 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cjd5c" event={"ID":"bb20e93f-4cda-4b8f-a1d8-7eaa697673b3","Type":"ContainerStarted","Data":"35f1706e32e9ce2f309b99a8bf779def9b06e590ce358c609c3467dc25a04f02"} Dec 01 15:02:15 crc kubenswrapper[4637]: I1201 15:02:15.166302 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-kcj9n" event={"ID":"51824828-e3b1-4d72-9838-8c95fcef21ca","Type":"ContainerStarted","Data":"0390bef255dd31b492039a9ff8c040410e55067a64ec73b96f365bf885b3757d"} Dec 01 15:02:15 crc kubenswrapper[4637]: I1201 15:02:15.169033 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-f4khp" 
event={"ID":"bf139f21-cf2f-4ef1-9474-9c785a02053e","Type":"ContainerStarted","Data":"931869992aa68846e5a8f16663fc28d2619884b4c04bebb1d397923fe2c64c06"} Dec 01 15:02:15 crc kubenswrapper[4637]: I1201 15:02:15.176516 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bfeecd83-4225-4d76-8002-6593dc66ab4f","Type":"ContainerStarted","Data":"a4312a6694db776485a584767be5d1246b8c53f4f6ea15ba04f8c8a1ac034c4b"} Dec 01 15:02:15 crc kubenswrapper[4637]: I1201 15:02:15.782681 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a79142fd-eac0-49db-8c5b-11cce20542ef" path="/var/lib/kubelet/pods/a79142fd-eac0-49db-8c5b-11cce20542ef/volumes" Dec 01 15:02:16 crc kubenswrapper[4637]: I1201 15:02:16.194432 4637 generic.go:334] "Generic (PLEG): container finished" podID="908ecd2f-4feb-4dc1-b8f0-555329bbe3b6" containerID="882894b4001512e8f23f18538baa79895bbe33d24027dfe8d425532b250780db" exitCode=0 Dec 01 15:02:16 crc kubenswrapper[4637]: I1201 15:02:16.194501 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wm78x" event={"ID":"908ecd2f-4feb-4dc1-b8f0-555329bbe3b6","Type":"ContainerDied","Data":"882894b4001512e8f23f18538baa79895bbe33d24027dfe8d425532b250780db"} Dec 01 15:02:16 crc kubenswrapper[4637]: I1201 15:02:16.197860 4637 generic.go:334] "Generic (PLEG): container finished" podID="30f33978-c393-4faf-99c0-b6ce509f1d3f" containerID="c8f2adafb7bf88fdd327dba68d6543c08ee19079dbab3ec513577a60eb33670e" exitCode=0 Dec 01 15:02:16 crc kubenswrapper[4637]: I1201 15:02:16.197883 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsqc9" event={"ID":"30f33978-c393-4faf-99c0-b6ce509f1d3f","Type":"ContainerDied","Data":"c8f2adafb7bf88fdd327dba68d6543c08ee19079dbab3ec513577a60eb33670e"} Dec 01 15:02:18 crc kubenswrapper[4637]: I1201 15:02:18.218172 4637 generic.go:334] "Generic (PLEG): container finished" 
podID="bb20e93f-4cda-4b8f-a1d8-7eaa697673b3" containerID="097a101d4095bb542dc399d12aaa40c957752aa6667f83b497d076bcf39c5ec6" exitCode=0 Dec 01 15:02:18 crc kubenswrapper[4637]: I1201 15:02:18.218232 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cjd5c" event={"ID":"bb20e93f-4cda-4b8f-a1d8-7eaa697673b3","Type":"ContainerDied","Data":"097a101d4095bb542dc399d12aaa40c957752aa6667f83b497d076bcf39c5ec6"} Dec 01 15:02:18 crc kubenswrapper[4637]: E1201 15:02:18.988631 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:650ba1d1fb09dfb2a96481f2ebae84fbae4b3474d057dd6d1569ce094dc41659: Get \\\"https://quay.io/v2/podified-antelope-centos9/openstack-ovn-sb-db-server/blobs/sha256:650ba1d1fb09dfb2a96481f2ebae84fbae4b3474d057dd6d1569ce094dc41659\\\": context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="db3ed190-cdcc-4547-b48f-d09f6e881dfb" Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.229524 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-f4khp" event={"ID":"bf139f21-cf2f-4ef1-9474-9c785a02053e","Type":"ContainerStarted","Data":"433a51598960f50f88396118d0b5556168ffc835903c2344de897c8ef6b8152c"} Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.231553 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"db3ed190-cdcc-4547-b48f-d09f6e881dfb","Type":"ContainerStarted","Data":"3247a8436dc3a415adc320833f670542807eac7af66303650e8650df60f478f0"} Dec 01 15:02:19 crc kubenswrapper[4637]: E1201 15:02:19.232610 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" 
podUID="db3ed190-cdcc-4547-b48f-d09f6e881dfb" Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.233204 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"5903ea37-db92-4a76-afb4-14cfa23415d0","Type":"ContainerStarted","Data":"c37f33e3452fabe0730f7b2b96446632491bfe666dd4a3007865e4b62a4c972c"} Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.233489 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.235286 4637 generic.go:334] "Generic (PLEG): container finished" podID="908ecd2f-4feb-4dc1-b8f0-555329bbe3b6" containerID="f7b5bd34c1abd2074db5872645d6bdbd6c54d97bf335a0273e230ef599472548" exitCode=0 Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.235364 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wm78x" event={"ID":"908ecd2f-4feb-4dc1-b8f0-555329bbe3b6","Type":"ContainerDied","Data":"f7b5bd34c1abd2074db5872645d6bdbd6c54d97bf335a0273e230ef599472548"} Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.237089 4637 generic.go:334] "Generic (PLEG): container finished" podID="e4037f80-7861-4283-99d5-2b078ef3de4b" containerID="fed3251bc944602ab5d8fa7fcb0fb418106ebde4bcec974063b790e045910cb7" exitCode=0 Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.237146 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9pxbh" event={"ID":"e4037f80-7861-4283-99d5-2b078ef3de4b","Type":"ContainerDied","Data":"fed3251bc944602ab5d8fa7fcb0fb418106ebde4bcec974063b790e045910cb7"} Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.241981 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsqc9" event={"ID":"30f33978-c393-4faf-99c0-b6ce509f1d3f","Type":"ContainerStarted","Data":"6436bc6f8d5adb69404ed817c8bf989aab621daa0fdc9477fe6e1776979fc5a5"} Dec 01 15:02:19 crc 
kubenswrapper[4637]: I1201 15:02:19.248852 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d2a7716a-d336-4d98-97c6-32b6c8b18e28","Type":"ContainerStarted","Data":"c01e34ee19b6ccfceeeb1bb9b60609966c84c1124dac3600bb258bdfb2b8cfd3"} Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.249154 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.254268 4637 generic.go:334] "Generic (PLEG): container finished" podID="51824828-e3b1-4d72-9838-8c95fcef21ca" containerID="7731d1be2369c158d1f37db88822aa5ae4280339c5b5c5558c3e9c268bf63ffc" exitCode=0 Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.254426 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-kcj9n" event={"ID":"51824828-e3b1-4d72-9838-8c95fcef21ca","Type":"ContainerDied","Data":"7731d1be2369c158d1f37db88822aa5ae4280339c5b5c5558c3e9c268bf63ffc"} Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.257493 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bfeecd83-4225-4d76-8002-6593dc66ab4f","Type":"ContainerStarted","Data":"0222723cd0db409f7dd2bdb48b5144ab4a815489e4c22240a4a57abc678758bb"} Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.257538 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bfeecd83-4225-4d76-8002-6593dc66ab4f","Type":"ContainerStarted","Data":"5adf5d7f90938e7390efb515b0a03108847c85359ca3391dfb99d25e2181b7ae"} Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.268999 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-f4khp" podStartSLOduration=32.384640069 podStartE2EDuration="36.268977841s" podCreationTimestamp="2025-12-01 15:01:43 +0000 UTC" firstStartedPulling="2025-12-01 15:02:14.671571896 +0000 UTC 
m=+985.189280724" lastFinishedPulling="2025-12-01 15:02:18.555909668 +0000 UTC m=+989.073618496" observedRunningTime="2025-12-01 15:02:19.261152919 +0000 UTC m=+989.778861747" watchObservedRunningTime="2025-12-01 15:02:19.268977841 +0000 UTC m=+989.786686669" Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.521539 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=5.952675711 podStartE2EDuration="45.521402051s" podCreationTimestamp="2025-12-01 15:01:34 +0000 UTC" firstStartedPulling="2025-12-01 15:01:36.061196567 +0000 UTC m=+946.578905395" lastFinishedPulling="2025-12-01 15:02:15.629922917 +0000 UTC m=+986.147631735" observedRunningTime="2025-12-01 15:02:19.512539041 +0000 UTC m=+990.030247859" watchObservedRunningTime="2025-12-01 15:02:19.521402051 +0000 UTC m=+990.039110879" Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.564658 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.35774583 podStartE2EDuration="43.564624703s" podCreationTimestamp="2025-12-01 15:01:36 +0000 UTC" firstStartedPulling="2025-12-01 15:01:38.297218811 +0000 UTC m=+948.814927639" lastFinishedPulling="2025-12-01 15:02:18.504097684 +0000 UTC m=+989.021806512" observedRunningTime="2025-12-01 15:02:19.557606192 +0000 UTC m=+990.075315020" watchObservedRunningTime="2025-12-01 15:02:19.564624703 +0000 UTC m=+990.082333531" Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.684732 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=32.911472067 podStartE2EDuration="36.684703317s" podCreationTimestamp="2025-12-01 15:01:43 +0000 UTC" firstStartedPulling="2025-12-01 15:02:14.780348285 +0000 UTC m=+985.298057113" lastFinishedPulling="2025-12-01 15:02:18.553579535 +0000 UTC m=+989.071288363" observedRunningTime="2025-12-01 15:02:19.667515291 +0000 UTC 
m=+990.185224119" watchObservedRunningTime="2025-12-01 15:02:19.684703317 +0000 UTC m=+990.202412145" Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.811717 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8wt54"] Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.854111 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.863005 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-8m9js"] Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.864328 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-8m9js" Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.874834 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-8m9js"] Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.874982 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.975827 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56ed39eb-94a6-49c5-acc8-d30f60c654a9-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-8m9js\" (UID: \"56ed39eb-94a6-49c5-acc8-d30f60c654a9\") " pod="openstack/dnsmasq-dns-8554648995-8m9js" Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.975981 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56ed39eb-94a6-49c5-acc8-d30f60c654a9-dns-svc\") pod \"dnsmasq-dns-8554648995-8m9js\" (UID: \"56ed39eb-94a6-49c5-acc8-d30f60c654a9\") " pod="openstack/dnsmasq-dns-8554648995-8m9js" Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.976019 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56ed39eb-94a6-49c5-acc8-d30f60c654a9-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-8m9js\" (UID: \"56ed39eb-94a6-49c5-acc8-d30f60c654a9\") " pod="openstack/dnsmasq-dns-8554648995-8m9js" Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.976042 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssb5j\" (UniqueName: \"kubernetes.io/projected/56ed39eb-94a6-49c5-acc8-d30f60c654a9-kube-api-access-ssb5j\") pod \"dnsmasq-dns-8554648995-8m9js\" (UID: \"56ed39eb-94a6-49c5-acc8-d30f60c654a9\") " pod="openstack/dnsmasq-dns-8554648995-8m9js" Dec 01 15:02:19 crc kubenswrapper[4637]: I1201 15:02:19.976075 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ed39eb-94a6-49c5-acc8-d30f60c654a9-config\") pod \"dnsmasq-dns-8554648995-8m9js\" (UID: \"56ed39eb-94a6-49c5-acc8-d30f60c654a9\") " pod="openstack/dnsmasq-dns-8554648995-8m9js" Dec 01 15:02:20 crc kubenswrapper[4637]: I1201 15:02:20.077230 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56ed39eb-94a6-49c5-acc8-d30f60c654a9-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-8m9js\" (UID: \"56ed39eb-94a6-49c5-acc8-d30f60c654a9\") " pod="openstack/dnsmasq-dns-8554648995-8m9js" Dec 01 15:02:20 crc kubenswrapper[4637]: I1201 15:02:20.077332 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56ed39eb-94a6-49c5-acc8-d30f60c654a9-dns-svc\") pod \"dnsmasq-dns-8554648995-8m9js\" (UID: \"56ed39eb-94a6-49c5-acc8-d30f60c654a9\") " pod="openstack/dnsmasq-dns-8554648995-8m9js" Dec 01 15:02:20 crc kubenswrapper[4637]: I1201 15:02:20.077377 4637 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56ed39eb-94a6-49c5-acc8-d30f60c654a9-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-8m9js\" (UID: \"56ed39eb-94a6-49c5-acc8-d30f60c654a9\") " pod="openstack/dnsmasq-dns-8554648995-8m9js" Dec 01 15:02:20 crc kubenswrapper[4637]: I1201 15:02:20.078494 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56ed39eb-94a6-49c5-acc8-d30f60c654a9-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-8m9js\" (UID: \"56ed39eb-94a6-49c5-acc8-d30f60c654a9\") " pod="openstack/dnsmasq-dns-8554648995-8m9js" Dec 01 15:02:20 crc kubenswrapper[4637]: I1201 15:02:20.077401 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssb5j\" (UniqueName: \"kubernetes.io/projected/56ed39eb-94a6-49c5-acc8-d30f60c654a9-kube-api-access-ssb5j\") pod \"dnsmasq-dns-8554648995-8m9js\" (UID: \"56ed39eb-94a6-49c5-acc8-d30f60c654a9\") " pod="openstack/dnsmasq-dns-8554648995-8m9js" Dec 01 15:02:20 crc kubenswrapper[4637]: I1201 15:02:20.078517 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56ed39eb-94a6-49c5-acc8-d30f60c654a9-dns-svc\") pod \"dnsmasq-dns-8554648995-8m9js\" (UID: \"56ed39eb-94a6-49c5-acc8-d30f60c654a9\") " pod="openstack/dnsmasq-dns-8554648995-8m9js" Dec 01 15:02:20 crc kubenswrapper[4637]: I1201 15:02:20.078520 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56ed39eb-94a6-49c5-acc8-d30f60c654a9-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-8m9js\" (UID: \"56ed39eb-94a6-49c5-acc8-d30f60c654a9\") " pod="openstack/dnsmasq-dns-8554648995-8m9js" Dec 01 15:02:20 crc kubenswrapper[4637]: I1201 15:02:20.078682 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/56ed39eb-94a6-49c5-acc8-d30f60c654a9-config\") pod \"dnsmasq-dns-8554648995-8m9js\" (UID: \"56ed39eb-94a6-49c5-acc8-d30f60c654a9\") " pod="openstack/dnsmasq-dns-8554648995-8m9js" Dec 01 15:02:20 crc kubenswrapper[4637]: I1201 15:02:20.079447 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ed39eb-94a6-49c5-acc8-d30f60c654a9-config\") pod \"dnsmasq-dns-8554648995-8m9js\" (UID: \"56ed39eb-94a6-49c5-acc8-d30f60c654a9\") " pod="openstack/dnsmasq-dns-8554648995-8m9js" Dec 01 15:02:20 crc kubenswrapper[4637]: I1201 15:02:20.118856 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssb5j\" (UniqueName: \"kubernetes.io/projected/56ed39eb-94a6-49c5-acc8-d30f60c654a9-kube-api-access-ssb5j\") pod \"dnsmasq-dns-8554648995-8m9js\" (UID: \"56ed39eb-94a6-49c5-acc8-d30f60c654a9\") " pod="openstack/dnsmasq-dns-8554648995-8m9js" Dec 01 15:02:20 crc kubenswrapper[4637]: I1201 15:02:20.188677 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-8m9js" Dec 01 15:02:20 crc kubenswrapper[4637]: I1201 15:02:20.292346 4637 generic.go:334] "Generic (PLEG): container finished" podID="30f33978-c393-4faf-99c0-b6ce509f1d3f" containerID="6436bc6f8d5adb69404ed817c8bf989aab621daa0fdc9477fe6e1776979fc5a5" exitCode=0 Dec 01 15:02:20 crc kubenswrapper[4637]: I1201 15:02:20.293435 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsqc9" event={"ID":"30f33978-c393-4faf-99c0-b6ce509f1d3f","Type":"ContainerDied","Data":"6436bc6f8d5adb69404ed817c8bf989aab621daa0fdc9477fe6e1776979fc5a5"} Dec 01 15:02:20 crc kubenswrapper[4637]: I1201 15:02:20.306306 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cjd5c" event={"ID":"bb20e93f-4cda-4b8f-a1d8-7eaa697673b3","Type":"ContainerStarted","Data":"44fc5881c71765b6dbad6580789bef885475d770d9cd9eef0461ea0618d3877e"} Dec 01 15:02:20 crc kubenswrapper[4637]: I1201 15:02:20.327351 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-kcj9n" event={"ID":"51824828-e3b1-4d72-9838-8c95fcef21ca","Type":"ContainerStarted","Data":"05971e9dafbf32c18a52e0e88cc92e35e99dddea0f151f9d4922e738e3b45727"} Dec 01 15:02:20 crc kubenswrapper[4637]: I1201 15:02:20.328170 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bc7876d45-kcj9n" Dec 01 15:02:20 crc kubenswrapper[4637]: E1201 15:02:20.398667 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="db3ed190-cdcc-4547-b48f-d09f6e881dfb" Dec 01 15:02:20 crc kubenswrapper[4637]: I1201 15:02:20.419568 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-6bc7876d45-kcj9n" podStartSLOduration=34.549403374 podStartE2EDuration="37.419549181s" podCreationTimestamp="2025-12-01 15:01:43 +0000 UTC" firstStartedPulling="2025-12-01 15:02:14.671218317 +0000 UTC m=+985.188927145" lastFinishedPulling="2025-12-01 15:02:17.541364124 +0000 UTC m=+988.059072952" observedRunningTime="2025-12-01 15:02:20.416470756 +0000 UTC m=+990.934179594" watchObservedRunningTime="2025-12-01 15:02:20.419549181 +0000 UTC m=+990.937258009" Dec 01 15:02:20 crc kubenswrapper[4637]: I1201 15:02:20.564436 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8wt54" Dec 01 15:02:20 crc kubenswrapper[4637]: I1201 15:02:20.693073 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25ee40bc-05f5-4401-9fba-b3333df6b27e-config\") pod \"25ee40bc-05f5-4401-9fba-b3333df6b27e\" (UID: \"25ee40bc-05f5-4401-9fba-b3333df6b27e\") " Dec 01 15:02:20 crc kubenswrapper[4637]: I1201 15:02:20.693629 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkkw9\" (UniqueName: \"kubernetes.io/projected/25ee40bc-05f5-4401-9fba-b3333df6b27e-kube-api-access-tkkw9\") pod \"25ee40bc-05f5-4401-9fba-b3333df6b27e\" (UID: \"25ee40bc-05f5-4401-9fba-b3333df6b27e\") " Dec 01 15:02:20 crc kubenswrapper[4637]: I1201 15:02:20.693678 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25ee40bc-05f5-4401-9fba-b3333df6b27e-dns-svc\") pod \"25ee40bc-05f5-4401-9fba-b3333df6b27e\" (UID: \"25ee40bc-05f5-4401-9fba-b3333df6b27e\") " Dec 01 15:02:20 crc kubenswrapper[4637]: I1201 15:02:20.694759 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25ee40bc-05f5-4401-9fba-b3333df6b27e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod 
"25ee40bc-05f5-4401-9fba-b3333df6b27e" (UID: "25ee40bc-05f5-4401-9fba-b3333df6b27e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:02:20 crc kubenswrapper[4637]: I1201 15:02:20.695113 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25ee40bc-05f5-4401-9fba-b3333df6b27e-config" (OuterVolumeSpecName: "config") pod "25ee40bc-05f5-4401-9fba-b3333df6b27e" (UID: "25ee40bc-05f5-4401-9fba-b3333df6b27e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:02:20 crc kubenswrapper[4637]: I1201 15:02:20.702288 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25ee40bc-05f5-4401-9fba-b3333df6b27e-kube-api-access-tkkw9" (OuterVolumeSpecName: "kube-api-access-tkkw9") pod "25ee40bc-05f5-4401-9fba-b3333df6b27e" (UID: "25ee40bc-05f5-4401-9fba-b3333df6b27e"). InnerVolumeSpecName "kube-api-access-tkkw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:02:20 crc kubenswrapper[4637]: I1201 15:02:20.802780 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25ee40bc-05f5-4401-9fba-b3333df6b27e-config\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:20 crc kubenswrapper[4637]: I1201 15:02:20.802810 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkkw9\" (UniqueName: \"kubernetes.io/projected/25ee40bc-05f5-4401-9fba-b3333df6b27e-kube-api-access-tkkw9\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:20 crc kubenswrapper[4637]: I1201 15:02:20.802819 4637 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25ee40bc-05f5-4401-9fba-b3333df6b27e-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:20 crc kubenswrapper[4637]: I1201 15:02:20.847195 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/ovsdbserver-nb-0" Dec 01 15:02:21 crc kubenswrapper[4637]: I1201 15:02:21.309727 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-8m9js"] Dec 01 15:02:21 crc kubenswrapper[4637]: W1201 15:02:21.311839 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56ed39eb_94a6_49c5_acc8_d30f60c654a9.slice/crio-247780f1401e1aaeb9893a38774cb872755d7756326cfa1410cd92aac53efb28 WatchSource:0}: Error finding container 247780f1401e1aaeb9893a38774cb872755d7756326cfa1410cd92aac53efb28: Status 404 returned error can't find the container with id 247780f1401e1aaeb9893a38774cb872755d7756326cfa1410cd92aac53efb28 Dec 01 15:02:21 crc kubenswrapper[4637]: I1201 15:02:21.333738 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-8m9js" event={"ID":"56ed39eb-94a6-49c5-acc8-d30f60c654a9","Type":"ContainerStarted","Data":"247780f1401e1aaeb9893a38774cb872755d7756326cfa1410cd92aac53efb28"} Dec 01 15:02:21 crc kubenswrapper[4637]: I1201 15:02:21.335586 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wm78x" event={"ID":"908ecd2f-4feb-4dc1-b8f0-555329bbe3b6","Type":"ContainerStarted","Data":"cbcbd47efb05d762c1f9065c6eac502c2419c37b1f705de7d9a1fe1ae2397d99"} Dec 01 15:02:21 crc kubenswrapper[4637]: I1201 15:02:21.347389 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8wt54" event={"ID":"25ee40bc-05f5-4401-9fba-b3333df6b27e","Type":"ContainerDied","Data":"52282add689840593950d861db13a1fe1c82ca9f133a01452632202ba6d23b2e"} Dec 01 15:02:21 crc kubenswrapper[4637]: I1201 15:02:21.347474 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8wt54" Dec 01 15:02:21 crc kubenswrapper[4637]: I1201 15:02:21.367126 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9pxbh" event={"ID":"e4037f80-7861-4283-99d5-2b078ef3de4b","Type":"ContainerStarted","Data":"767d47a2231b58589d7d1b0b0289c7fc099cbf82a91b4a40fc1f6ca119c709e0"} Dec 01 15:02:21 crc kubenswrapper[4637]: I1201 15:02:21.367162 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9pxbh" event={"ID":"e4037f80-7861-4283-99d5-2b078ef3de4b","Type":"ContainerStarted","Data":"dff9cb39c36a9d26aa0bff2e9e9c8e799824c4e79af733369203441f20e65908"} Dec 01 15:02:21 crc kubenswrapper[4637]: I1201 15:02:21.367275 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9pxbh" Dec 01 15:02:21 crc kubenswrapper[4637]: I1201 15:02:21.376237 4637 generic.go:334] "Generic (PLEG): container finished" podID="bb20e93f-4cda-4b8f-a1d8-7eaa697673b3" containerID="44fc5881c71765b6dbad6580789bef885475d770d9cd9eef0461ea0618d3877e" exitCode=0 Dec 01 15:02:21 crc kubenswrapper[4637]: I1201 15:02:21.377838 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cjd5c" event={"ID":"bb20e93f-4cda-4b8f-a1d8-7eaa697673b3","Type":"ContainerDied","Data":"44fc5881c71765b6dbad6580789bef885475d770d9cd9eef0461ea0618d3877e"} Dec 01 15:02:21 crc kubenswrapper[4637]: I1201 15:02:21.408193 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-9pxbh" podStartSLOduration=14.386223214 podStartE2EDuration="40.408174381s" podCreationTimestamp="2025-12-01 15:01:41 +0000 UTC" firstStartedPulling="2025-12-01 15:01:48.634885371 +0000 UTC m=+959.152594199" lastFinishedPulling="2025-12-01 15:02:14.656836538 +0000 UTC m=+985.174545366" observedRunningTime="2025-12-01 15:02:21.403793642 +0000 UTC m=+991.921502470" 
watchObservedRunningTime="2025-12-01 15:02:21.408174381 +0000 UTC m=+991.925883209" Dec 01 15:02:21 crc kubenswrapper[4637]: I1201 15:02:21.412056 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wm78x" podStartSLOduration=15.81209137 podStartE2EDuration="19.412045455s" podCreationTimestamp="2025-12-01 15:02:02 +0000 UTC" firstStartedPulling="2025-12-01 15:02:17.153816632 +0000 UTC m=+987.671525460" lastFinishedPulling="2025-12-01 15:02:20.753770717 +0000 UTC m=+991.271479545" observedRunningTime="2025-12-01 15:02:21.359741818 +0000 UTC m=+991.877450666" watchObservedRunningTime="2025-12-01 15:02:21.412045455 +0000 UTC m=+991.929754283" Dec 01 15:02:21 crc kubenswrapper[4637]: I1201 15:02:21.491092 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8wt54"] Dec 01 15:02:21 crc kubenswrapper[4637]: I1201 15:02:21.497025 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8wt54"] Dec 01 15:02:21 crc kubenswrapper[4637]: I1201 15:02:21.780360 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25ee40bc-05f5-4401-9fba-b3333df6b27e" path="/var/lib/kubelet/pods/25ee40bc-05f5-4401-9fba-b3333df6b27e/volumes" Dec 01 15:02:21 crc kubenswrapper[4637]: I1201 15:02:21.820199 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9pxbh" Dec 01 15:02:22 crc kubenswrapper[4637]: I1201 15:02:22.386452 4637 generic.go:334] "Generic (PLEG): container finished" podID="56ed39eb-94a6-49c5-acc8-d30f60c654a9" containerID="a8ff9ca46caa1217a93b1fa62d864a1cb8b63a05f94a2aae60698ed0e516aba5" exitCode=0 Dec 01 15:02:22 crc kubenswrapper[4637]: I1201 15:02:22.387802 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-8m9js" 
event={"ID":"56ed39eb-94a6-49c5-acc8-d30f60c654a9","Type":"ContainerDied","Data":"a8ff9ca46caa1217a93b1fa62d864a1cb8b63a05f94a2aae60698ed0e516aba5"} Dec 01 15:02:23 crc kubenswrapper[4637]: I1201 15:02:23.171411 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wm78x" Dec 01 15:02:23 crc kubenswrapper[4637]: I1201 15:02:23.171869 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wm78x" Dec 01 15:02:23 crc kubenswrapper[4637]: I1201 15:02:23.237803 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wm78x" Dec 01 15:02:24 crc kubenswrapper[4637]: I1201 15:02:24.080348 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 01 15:02:24 crc kubenswrapper[4637]: I1201 15:02:24.125480 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 01 15:02:24 crc kubenswrapper[4637]: I1201 15:02:24.420184 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-8m9js" event={"ID":"56ed39eb-94a6-49c5-acc8-d30f60c654a9","Type":"ContainerStarted","Data":"d7699f53793b9c9189fc0260a665bca194e4f005d00ec449c0cc1f10ce4b02f0"} Dec 01 15:02:25 crc kubenswrapper[4637]: I1201 15:02:25.146144 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 01 15:02:25 crc kubenswrapper[4637]: I1201 15:02:25.430636 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dae9e33c-c07e-4c13-8104-d1310d91de8c","Type":"ContainerStarted","Data":"2bdc7a2c3c0d02b0e8fe316351373c849237c2d6b4831808bf83e5f1ebb861bf"} Dec 01 15:02:25 crc kubenswrapper[4637]: I1201 15:02:25.432682 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-cell1-galera-0" event={"ID":"3cfe4e59-0e72-4440-b962-2f86664cb2d7","Type":"ContainerStarted","Data":"381b861726722ab41c9e9bca5d13d7a1f5a02160c4cb36c7421509c791282e51"} Dec 01 15:02:25 crc kubenswrapper[4637]: I1201 15:02:25.437030 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsqc9" event={"ID":"30f33978-c393-4faf-99c0-b6ce509f1d3f","Type":"ContainerStarted","Data":"2d3a35b6a9b1c07c864ab74bff216e665c525e9e0192aa4b26da9920ec2b3f4b"} Dec 01 15:02:25 crc kubenswrapper[4637]: I1201 15:02:25.440630 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cjd5c" event={"ID":"bb20e93f-4cda-4b8f-a1d8-7eaa697673b3","Type":"ContainerStarted","Data":"631ff41be70f5e648f7f5e4b8e7edb4c64b06d40f827b1e96f9d4465014cb956"} Dec 01 15:02:25 crc kubenswrapper[4637]: I1201 15:02:25.440686 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-8m9js" Dec 01 15:02:25 crc kubenswrapper[4637]: I1201 15:02:25.492614 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xsqc9" podStartSLOduration=21.2216469 podStartE2EDuration="28.492588454s" podCreationTimestamp="2025-12-01 15:01:57 +0000 UTC" firstStartedPulling="2025-12-01 15:02:17.153847383 +0000 UTC m=+987.671556211" lastFinishedPulling="2025-12-01 15:02:24.424788937 +0000 UTC m=+994.942497765" observedRunningTime="2025-12-01 15:02:25.479375405 +0000 UTC m=+995.997084233" watchObservedRunningTime="2025-12-01 15:02:25.492588454 +0000 UTC m=+996.010297282" Dec 01 15:02:25 crc kubenswrapper[4637]: I1201 15:02:25.565973 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-8m9js" podStartSLOduration=6.565954982 podStartE2EDuration="6.565954982s" podCreationTimestamp="2025-12-01 15:02:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:02:25.563519986 +0000 UTC m=+996.081228824" watchObservedRunningTime="2025-12-01 15:02:25.565954982 +0000 UTC m=+996.083663800" Dec 01 15:02:25 crc kubenswrapper[4637]: I1201 15:02:25.599974 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cjd5c" podStartSLOduration=20.553882803 podStartE2EDuration="26.599948344s" podCreationTimestamp="2025-12-01 15:01:59 +0000 UTC" firstStartedPulling="2025-12-01 15:02:18.435599048 +0000 UTC m=+988.953307876" lastFinishedPulling="2025-12-01 15:02:24.481664589 +0000 UTC m=+994.999373417" observedRunningTime="2025-12-01 15:02:25.595552404 +0000 UTC m=+996.113261242" watchObservedRunningTime="2025-12-01 15:02:25.599948344 +0000 UTC m=+996.117657172" Dec 01 15:02:26 crc kubenswrapper[4637]: I1201 15:02:26.449609 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8eeaa55a-2c35-480c-baec-134ef1158e66","Type":"ContainerStarted","Data":"8d67755d90b536e7b72345ed9bed90290fe5733e7d1987be7c0301d882841ddf"} Dec 01 15:02:27 crc kubenswrapper[4637]: I1201 15:02:27.320484 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 01 15:02:27 crc kubenswrapper[4637]: I1201 15:02:27.545277 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-kcj9n"] Dec 01 15:02:27 crc kubenswrapper[4637]: I1201 15:02:27.545786 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bc7876d45-kcj9n" podUID="51824828-e3b1-4d72-9838-8c95fcef21ca" containerName="dnsmasq-dns" containerID="cri-o://05971e9dafbf32c18a52e0e88cc92e35e99dddea0f151f9d4922e738e3b45727" gracePeriod=10 Dec 01 15:02:27 crc kubenswrapper[4637]: I1201 15:02:27.550210 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-6bc7876d45-kcj9n" Dec 01 15:02:27 crc kubenswrapper[4637]: I1201 15:02:27.586142 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xsqc9" Dec 01 15:02:27 crc kubenswrapper[4637]: I1201 15:02:27.586224 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xsqc9" Dec 01 15:02:27 crc kubenswrapper[4637]: I1201 15:02:27.705709 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-5qcft"] Dec 01 15:02:27 crc kubenswrapper[4637]: I1201 15:02:27.707331 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-5qcft" Dec 01 15:02:27 crc kubenswrapper[4637]: I1201 15:02:27.760725 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-5qcft"] Dec 01 15:02:27 crc kubenswrapper[4637]: I1201 15:02:27.778672 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/729b2ee4-c622-4dea-86b7-94fffda9ed7f-config\") pod \"dnsmasq-dns-b8fbc5445-5qcft\" (UID: \"729b2ee4-c622-4dea-86b7-94fffda9ed7f\") " pod="openstack/dnsmasq-dns-b8fbc5445-5qcft" Dec 01 15:02:27 crc kubenswrapper[4637]: I1201 15:02:27.778753 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/729b2ee4-c622-4dea-86b7-94fffda9ed7f-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-5qcft\" (UID: \"729b2ee4-c622-4dea-86b7-94fffda9ed7f\") " pod="openstack/dnsmasq-dns-b8fbc5445-5qcft" Dec 01 15:02:27 crc kubenswrapper[4637]: I1201 15:02:27.778818 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/729b2ee4-c622-4dea-86b7-94fffda9ed7f-ovsdbserver-sb\") pod 
\"dnsmasq-dns-b8fbc5445-5qcft\" (UID: \"729b2ee4-c622-4dea-86b7-94fffda9ed7f\") " pod="openstack/dnsmasq-dns-b8fbc5445-5qcft" Dec 01 15:02:27 crc kubenswrapper[4637]: I1201 15:02:27.778842 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/729b2ee4-c622-4dea-86b7-94fffda9ed7f-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-5qcft\" (UID: \"729b2ee4-c622-4dea-86b7-94fffda9ed7f\") " pod="openstack/dnsmasq-dns-b8fbc5445-5qcft" Dec 01 15:02:27 crc kubenswrapper[4637]: I1201 15:02:27.778947 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mfk7\" (UniqueName: \"kubernetes.io/projected/729b2ee4-c622-4dea-86b7-94fffda9ed7f-kube-api-access-6mfk7\") pod \"dnsmasq-dns-b8fbc5445-5qcft\" (UID: \"729b2ee4-c622-4dea-86b7-94fffda9ed7f\") " pod="openstack/dnsmasq-dns-b8fbc5445-5qcft" Dec 01 15:02:27 crc kubenswrapper[4637]: I1201 15:02:27.880545 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/729b2ee4-c622-4dea-86b7-94fffda9ed7f-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-5qcft\" (UID: \"729b2ee4-c622-4dea-86b7-94fffda9ed7f\") " pod="openstack/dnsmasq-dns-b8fbc5445-5qcft" Dec 01 15:02:27 crc kubenswrapper[4637]: I1201 15:02:27.880591 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/729b2ee4-c622-4dea-86b7-94fffda9ed7f-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-5qcft\" (UID: \"729b2ee4-c622-4dea-86b7-94fffda9ed7f\") " pod="openstack/dnsmasq-dns-b8fbc5445-5qcft" Dec 01 15:02:27 crc kubenswrapper[4637]: I1201 15:02:27.882954 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/729b2ee4-c622-4dea-86b7-94fffda9ed7f-ovsdbserver-nb\") pod 
\"dnsmasq-dns-b8fbc5445-5qcft\" (UID: \"729b2ee4-c622-4dea-86b7-94fffda9ed7f\") " pod="openstack/dnsmasq-dns-b8fbc5445-5qcft" Dec 01 15:02:27 crc kubenswrapper[4637]: I1201 15:02:27.885219 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mfk7\" (UniqueName: \"kubernetes.io/projected/729b2ee4-c622-4dea-86b7-94fffda9ed7f-kube-api-access-6mfk7\") pod \"dnsmasq-dns-b8fbc5445-5qcft\" (UID: \"729b2ee4-c622-4dea-86b7-94fffda9ed7f\") " pod="openstack/dnsmasq-dns-b8fbc5445-5qcft" Dec 01 15:02:27 crc kubenswrapper[4637]: I1201 15:02:27.885307 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/729b2ee4-c622-4dea-86b7-94fffda9ed7f-config\") pod \"dnsmasq-dns-b8fbc5445-5qcft\" (UID: \"729b2ee4-c622-4dea-86b7-94fffda9ed7f\") " pod="openstack/dnsmasq-dns-b8fbc5445-5qcft" Dec 01 15:02:27 crc kubenswrapper[4637]: I1201 15:02:27.885377 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/729b2ee4-c622-4dea-86b7-94fffda9ed7f-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-5qcft\" (UID: \"729b2ee4-c622-4dea-86b7-94fffda9ed7f\") " pod="openstack/dnsmasq-dns-b8fbc5445-5qcft" Dec 01 15:02:27 crc kubenswrapper[4637]: I1201 15:02:27.886598 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/729b2ee4-c622-4dea-86b7-94fffda9ed7f-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-5qcft\" (UID: \"729b2ee4-c622-4dea-86b7-94fffda9ed7f\") " pod="openstack/dnsmasq-dns-b8fbc5445-5qcft" Dec 01 15:02:27 crc kubenswrapper[4637]: I1201 15:02:27.886835 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/729b2ee4-c622-4dea-86b7-94fffda9ed7f-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-5qcft\" (UID: \"729b2ee4-c622-4dea-86b7-94fffda9ed7f\") " 
pod="openstack/dnsmasq-dns-b8fbc5445-5qcft" Dec 01 15:02:27 crc kubenswrapper[4637]: I1201 15:02:27.886971 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/729b2ee4-c622-4dea-86b7-94fffda9ed7f-config\") pod \"dnsmasq-dns-b8fbc5445-5qcft\" (UID: \"729b2ee4-c622-4dea-86b7-94fffda9ed7f\") " pod="openstack/dnsmasq-dns-b8fbc5445-5qcft" Dec 01 15:02:27 crc kubenswrapper[4637]: I1201 15:02:27.930206 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mfk7\" (UniqueName: \"kubernetes.io/projected/729b2ee4-c622-4dea-86b7-94fffda9ed7f-kube-api-access-6mfk7\") pod \"dnsmasq-dns-b8fbc5445-5qcft\" (UID: \"729b2ee4-c622-4dea-86b7-94fffda9ed7f\") " pod="openstack/dnsmasq-dns-b8fbc5445-5qcft" Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.034740 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-5qcft" Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.527163 4637 generic.go:334] "Generic (PLEG): container finished" podID="51824828-e3b1-4d72-9838-8c95fcef21ca" containerID="05971e9dafbf32c18a52e0e88cc92e35e99dddea0f151f9d4922e738e3b45727" exitCode=0 Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.527351 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-kcj9n" event={"ID":"51824828-e3b1-4d72-9838-8c95fcef21ca","Type":"ContainerDied","Data":"05971e9dafbf32c18a52e0e88cc92e35e99dddea0f151f9d4922e738e3b45727"} Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.656051 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-kcj9n" Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.725313 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 01 15:02:28 crc kubenswrapper[4637]: E1201 15:02:28.725895 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51824828-e3b1-4d72-9838-8c95fcef21ca" containerName="init" Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.725910 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="51824828-e3b1-4d72-9838-8c95fcef21ca" containerName="init" Dec 01 15:02:28 crc kubenswrapper[4637]: E1201 15:02:28.725945 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51824828-e3b1-4d72-9838-8c95fcef21ca" containerName="dnsmasq-dns" Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.725953 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="51824828-e3b1-4d72-9838-8c95fcef21ca" containerName="dnsmasq-dns" Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.726163 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="51824828-e3b1-4d72-9838-8c95fcef21ca" containerName="dnsmasq-dns" Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.770049 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.778322 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xsqc9" podUID="30f33978-c393-4faf-99c0-b6ce509f1d3f" containerName="registry-server" probeResult="failure" output=< Dec 01 15:02:28 crc kubenswrapper[4637]: timeout: failed to connect service ":50051" within 1s Dec 01 15:02:28 crc kubenswrapper[4637]: > Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.778972 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.779288 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-dwns5" Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.779414 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.779517 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.780167 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.804157 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51824828-e3b1-4d72-9838-8c95fcef21ca-dns-svc\") pod \"51824828-e3b1-4d72-9838-8c95fcef21ca\" (UID: \"51824828-e3b1-4d72-9838-8c95fcef21ca\") " Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.804259 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51824828-e3b1-4d72-9838-8c95fcef21ca-ovsdbserver-sb\") pod \"51824828-e3b1-4d72-9838-8c95fcef21ca\" (UID: \"51824828-e3b1-4d72-9838-8c95fcef21ca\") " Dec 01 15:02:28 
crc kubenswrapper[4637]: I1201 15:02:28.804430 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51824828-e3b1-4d72-9838-8c95fcef21ca-config\") pod \"51824828-e3b1-4d72-9838-8c95fcef21ca\" (UID: \"51824828-e3b1-4d72-9838-8c95fcef21ca\") " Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.804565 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9kd5\" (UniqueName: \"kubernetes.io/projected/51824828-e3b1-4d72-9838-8c95fcef21ca-kube-api-access-f9kd5\") pod \"51824828-e3b1-4d72-9838-8c95fcef21ca\" (UID: \"51824828-e3b1-4d72-9838-8c95fcef21ca\") " Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.849339 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51824828-e3b1-4d72-9838-8c95fcef21ca-kube-api-access-f9kd5" (OuterVolumeSpecName: "kube-api-access-f9kd5") pod "51824828-e3b1-4d72-9838-8c95fcef21ca" (UID: "51824828-e3b1-4d72-9838-8c95fcef21ca"). InnerVolumeSpecName "kube-api-access-f9kd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.901686 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51824828-e3b1-4d72-9838-8c95fcef21ca-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "51824828-e3b1-4d72-9838-8c95fcef21ca" (UID: "51824828-e3b1-4d72-9838-8c95fcef21ca"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.907004 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hbfv\" (UniqueName: \"kubernetes.io/projected/b2fd7aa8-b5cb-4c3c-976c-210541a77440-kube-api-access-7hbfv\") pod \"swift-storage-0\" (UID: \"b2fd7aa8-b5cb-4c3c-976c-210541a77440\") " pod="openstack/swift-storage-0" Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.907068 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2fd7aa8-b5cb-4c3c-976c-210541a77440-etc-swift\") pod \"swift-storage-0\" (UID: \"b2fd7aa8-b5cb-4c3c-976c-210541a77440\") " pod="openstack/swift-storage-0" Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.907184 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"b2fd7aa8-b5cb-4c3c-976c-210541a77440\") " pod="openstack/swift-storage-0" Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.907314 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2fd7aa8-b5cb-4c3c-976c-210541a77440-cache\") pod \"swift-storage-0\" (UID: \"b2fd7aa8-b5cb-4c3c-976c-210541a77440\") " pod="openstack/swift-storage-0" Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.907348 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b2fd7aa8-b5cb-4c3c-976c-210541a77440-lock\") pod \"swift-storage-0\" (UID: \"b2fd7aa8-b5cb-4c3c-976c-210541a77440\") " pod="openstack/swift-storage-0" Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.907394 4637 
reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51824828-e3b1-4d72-9838-8c95fcef21ca-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.907405 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9kd5\" (UniqueName: \"kubernetes.io/projected/51824828-e3b1-4d72-9838-8c95fcef21ca-kube-api-access-f9kd5\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.943699 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51824828-e3b1-4d72-9838-8c95fcef21ca-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "51824828-e3b1-4d72-9838-8c95fcef21ca" (UID: "51824828-e3b1-4d72-9838-8c95fcef21ca"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:02:28 crc kubenswrapper[4637]: I1201 15:02:28.945507 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51824828-e3b1-4d72-9838-8c95fcef21ca-config" (OuterVolumeSpecName: "config") pod "51824828-e3b1-4d72-9838-8c95fcef21ca" (UID: "51824828-e3b1-4d72-9838-8c95fcef21ca"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:02:29 crc kubenswrapper[4637]: I1201 15:02:29.009147 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"b2fd7aa8-b5cb-4c3c-976c-210541a77440\") " pod="openstack/swift-storage-0" Dec 01 15:02:29 crc kubenswrapper[4637]: I1201 15:02:29.009313 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2fd7aa8-b5cb-4c3c-976c-210541a77440-cache\") pod \"swift-storage-0\" (UID: \"b2fd7aa8-b5cb-4c3c-976c-210541a77440\") " pod="openstack/swift-storage-0" Dec 01 15:02:29 crc kubenswrapper[4637]: I1201 15:02:29.009336 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b2fd7aa8-b5cb-4c3c-976c-210541a77440-lock\") pod \"swift-storage-0\" (UID: \"b2fd7aa8-b5cb-4c3c-976c-210541a77440\") " pod="openstack/swift-storage-0" Dec 01 15:02:29 crc kubenswrapper[4637]: I1201 15:02:29.009627 4637 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"b2fd7aa8-b5cb-4c3c-976c-210541a77440\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/swift-storage-0" Dec 01 15:02:29 crc kubenswrapper[4637]: I1201 15:02:29.009900 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2fd7aa8-b5cb-4c3c-976c-210541a77440-cache\") pod \"swift-storage-0\" (UID: \"b2fd7aa8-b5cb-4c3c-976c-210541a77440\") " pod="openstack/swift-storage-0" Dec 01 15:02:29 crc kubenswrapper[4637]: I1201 15:02:29.009972 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hbfv\" (UniqueName: 
\"kubernetes.io/projected/b2fd7aa8-b5cb-4c3c-976c-210541a77440-kube-api-access-7hbfv\") pod \"swift-storage-0\" (UID: \"b2fd7aa8-b5cb-4c3c-976c-210541a77440\") " pod="openstack/swift-storage-0" Dec 01 15:02:29 crc kubenswrapper[4637]: I1201 15:02:29.009998 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2fd7aa8-b5cb-4c3c-976c-210541a77440-etc-swift\") pod \"swift-storage-0\" (UID: \"b2fd7aa8-b5cb-4c3c-976c-210541a77440\") " pod="openstack/swift-storage-0" Dec 01 15:02:29 crc kubenswrapper[4637]: I1201 15:02:29.010238 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b2fd7aa8-b5cb-4c3c-976c-210541a77440-lock\") pod \"swift-storage-0\" (UID: \"b2fd7aa8-b5cb-4c3c-976c-210541a77440\") " pod="openstack/swift-storage-0" Dec 01 15:02:29 crc kubenswrapper[4637]: E1201 15:02:29.010970 4637 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 01 15:02:29 crc kubenswrapper[4637]: E1201 15:02:29.019328 4637 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 01 15:02:29 crc kubenswrapper[4637]: I1201 15:02:29.011392 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-5qcft"] Dec 01 15:02:29 crc kubenswrapper[4637]: E1201 15:02:29.019743 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2fd7aa8-b5cb-4c3c-976c-210541a77440-etc-swift podName:b2fd7aa8-b5cb-4c3c-976c-210541a77440 nodeName:}" failed. No retries permitted until 2025-12-01 15:02:29.519686244 +0000 UTC m=+1000.037395072 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2fd7aa8-b5cb-4c3c-976c-210541a77440-etc-swift") pod "swift-storage-0" (UID: "b2fd7aa8-b5cb-4c3c-976c-210541a77440") : configmap "swift-ring-files" not found Dec 01 15:02:29 crc kubenswrapper[4637]: I1201 15:02:29.020024 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51824828-e3b1-4d72-9838-8c95fcef21ca-config\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:29 crc kubenswrapper[4637]: I1201 15:02:29.020084 4637 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51824828-e3b1-4d72-9838-8c95fcef21ca-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:29 crc kubenswrapper[4637]: I1201 15:02:29.078480 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hbfv\" (UniqueName: \"kubernetes.io/projected/b2fd7aa8-b5cb-4c3c-976c-210541a77440-kube-api-access-7hbfv\") pod \"swift-storage-0\" (UID: \"b2fd7aa8-b5cb-4c3c-976c-210541a77440\") " pod="openstack/swift-storage-0" Dec 01 15:02:29 crc kubenswrapper[4637]: I1201 15:02:29.139219 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"b2fd7aa8-b5cb-4c3c-976c-210541a77440\") " pod="openstack/swift-storage-0" Dec 01 15:02:29 crc kubenswrapper[4637]: I1201 15:02:29.535176 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2fd7aa8-b5cb-4c3c-976c-210541a77440-etc-swift\") pod \"swift-storage-0\" (UID: \"b2fd7aa8-b5cb-4c3c-976c-210541a77440\") " pod="openstack/swift-storage-0" Dec 01 15:02:29 crc kubenswrapper[4637]: E1201 15:02:29.535543 4637 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found 
Dec 01 15:02:29 crc kubenswrapper[4637]: E1201 15:02:29.535576 4637 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 01 15:02:29 crc kubenswrapper[4637]: E1201 15:02:29.535676 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2fd7aa8-b5cb-4c3c-976c-210541a77440-etc-swift podName:b2fd7aa8-b5cb-4c3c-976c-210541a77440 nodeName:}" failed. No retries permitted until 2025-12-01 15:02:30.535650836 +0000 UTC m=+1001.053359664 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2fd7aa8-b5cb-4c3c-976c-210541a77440-etc-swift") pod "swift-storage-0" (UID: "b2fd7aa8-b5cb-4c3c-976c-210541a77440") : configmap "swift-ring-files" not found Dec 01 15:02:29 crc kubenswrapper[4637]: I1201 15:02:29.537230 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-5qcft" event={"ID":"729b2ee4-c622-4dea-86b7-94fffda9ed7f","Type":"ContainerStarted","Data":"a3036bef201de54135164f9572c8ddfdb976c8c9cd05fa1b9bbe382462a52230"} Dec 01 15:02:29 crc kubenswrapper[4637]: I1201 15:02:29.541769 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-kcj9n" event={"ID":"51824828-e3b1-4d72-9838-8c95fcef21ca","Type":"ContainerDied","Data":"0390bef255dd31b492039a9ff8c040410e55067a64ec73b96f365bf885b3757d"} Dec 01 15:02:29 crc kubenswrapper[4637]: I1201 15:02:29.541889 4637 scope.go:117] "RemoveContainer" containerID="05971e9dafbf32c18a52e0e88cc92e35e99dddea0f151f9d4922e738e3b45727" Dec 01 15:02:29 crc kubenswrapper[4637]: I1201 15:02:29.541836 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-kcj9n" Dec 01 15:02:29 crc kubenswrapper[4637]: I1201 15:02:29.544155 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dhqng" event={"ID":"8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e","Type":"ContainerStarted","Data":"34281251704cc149923cb8d16fa96a06c9918b995e0a1ccf17653e31481d623d"} Dec 01 15:02:29 crc kubenswrapper[4637]: I1201 15:02:29.544470 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-dhqng" Dec 01 15:02:29 crc kubenswrapper[4637]: I1201 15:02:29.582479 4637 scope.go:117] "RemoveContainer" containerID="7731d1be2369c158d1f37db88822aa5ae4280339c5b5c5558c3e9c268bf63ffc" Dec 01 15:02:29 crc kubenswrapper[4637]: I1201 15:02:29.590389 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-dhqng" podStartSLOduration=2.738410773 podStartE2EDuration="48.590353589s" podCreationTimestamp="2025-12-01 15:01:41 +0000 UTC" firstStartedPulling="2025-12-01 15:01:42.556894705 +0000 UTC m=+953.074603533" lastFinishedPulling="2025-12-01 15:02:28.408837521 +0000 UTC m=+998.926546349" observedRunningTime="2025-12-01 15:02:29.57454769 +0000 UTC m=+1000.092256518" watchObservedRunningTime="2025-12-01 15:02:29.590353589 +0000 UTC m=+1000.108062417" Dec 01 15:02:29 crc kubenswrapper[4637]: I1201 15:02:29.617189 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-kcj9n"] Dec 01 15:02:29 crc kubenswrapper[4637]: I1201 15:02:29.621602 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-kcj9n"] Dec 01 15:02:29 crc kubenswrapper[4637]: I1201 15:02:29.789837 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51824828-e3b1-4d72-9838-8c95fcef21ca" path="/var/lib/kubelet/pods/51824828-e3b1-4d72-9838-8c95fcef21ca/volumes" Dec 01 15:02:30 crc kubenswrapper[4637]: I1201 15:02:30.138803 4637 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cjd5c" Dec 01 15:02:30 crc kubenswrapper[4637]: I1201 15:02:30.139178 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cjd5c" Dec 01 15:02:30 crc kubenswrapper[4637]: I1201 15:02:30.192178 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-8m9js" Dec 01 15:02:30 crc kubenswrapper[4637]: I1201 15:02:30.556361 4637 generic.go:334] "Generic (PLEG): container finished" podID="729b2ee4-c622-4dea-86b7-94fffda9ed7f" containerID="50acc7c1ecc7f56bbe660aed7e1997c6e1db4e496a8fd1818bc3c852d5afa506" exitCode=0 Dec 01 15:02:30 crc kubenswrapper[4637]: I1201 15:02:30.556440 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-5qcft" event={"ID":"729b2ee4-c622-4dea-86b7-94fffda9ed7f","Type":"ContainerDied","Data":"50acc7c1ecc7f56bbe660aed7e1997c6e1db4e496a8fd1818bc3c852d5afa506"} Dec 01 15:02:30 crc kubenswrapper[4637]: I1201 15:02:30.561641 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bee806ff-8bec-49d0-a47f-bfd8edbb36fb","Type":"ContainerStarted","Data":"af0da3d91d099753d92b445ebb956eb6402c743e3a45b0e00e766e3bd331a51f"} Dec 01 15:02:30 crc kubenswrapper[4637]: I1201 15:02:30.563968 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2fd7aa8-b5cb-4c3c-976c-210541a77440-etc-swift\") pod \"swift-storage-0\" (UID: \"b2fd7aa8-b5cb-4c3c-976c-210541a77440\") " pod="openstack/swift-storage-0" Dec 01 15:02:30 crc kubenswrapper[4637]: E1201 15:02:30.564209 4637 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 01 15:02:30 crc kubenswrapper[4637]: E1201 15:02:30.564227 4637 projected.go:194] Error preparing 
data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 01 15:02:30 crc kubenswrapper[4637]: E1201 15:02:30.564268 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2fd7aa8-b5cb-4c3c-976c-210541a77440-etc-swift podName:b2fd7aa8-b5cb-4c3c-976c-210541a77440 nodeName:}" failed. No retries permitted until 2025-12-01 15:02:32.56425281 +0000 UTC m=+1003.081961638 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2fd7aa8-b5cb-4c3c-976c-210541a77440-etc-swift") pod "swift-storage-0" (UID: "b2fd7aa8-b5cb-4c3c-976c-210541a77440") : configmap "swift-ring-files" not found Dec 01 15:02:31 crc kubenswrapper[4637]: I1201 15:02:31.213338 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-cjd5c" podUID="bb20e93f-4cda-4b8f-a1d8-7eaa697673b3" containerName="registry-server" probeResult="failure" output=< Dec 01 15:02:31 crc kubenswrapper[4637]: timeout: failed to connect service ":50051" within 1s Dec 01 15:02:31 crc kubenswrapper[4637]: > Dec 01 15:02:31 crc kubenswrapper[4637]: I1201 15:02:31.571468 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-5qcft" event={"ID":"729b2ee4-c622-4dea-86b7-94fffda9ed7f","Type":"ContainerStarted","Data":"6654a325edd98e1a49035ff61887354a439bbee24230e33ffa8e4b5152e71caa"} Dec 01 15:02:31 crc kubenswrapper[4637]: I1201 15:02:31.571905 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-5qcft" Dec 01 15:02:31 crc kubenswrapper[4637]: I1201 15:02:31.603665 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-5qcft" podStartSLOduration=4.603647596 podStartE2EDuration="4.603647596s" podCreationTimestamp="2025-12-01 15:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:02:31.600880791 +0000 UTC m=+1002.118589619" watchObservedRunningTime="2025-12-01 15:02:31.603647596 +0000 UTC m=+1002.121356424"
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.402808 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-vnnqf"]
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.404201 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vnnqf"
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.407432 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.407906 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.408447 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.428262 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-vnnqf"]
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.513815 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9zn4\" (UniqueName: \"kubernetes.io/projected/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-kube-api-access-m9zn4\") pod \"swift-ring-rebalance-vnnqf\" (UID: \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\") " pod="openstack/swift-ring-rebalance-vnnqf"
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.513891 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-dispersionconf\") pod \"swift-ring-rebalance-vnnqf\" (UID: \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\") " pod="openstack/swift-ring-rebalance-vnnqf"
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.513968 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-swiftconf\") pod \"swift-ring-rebalance-vnnqf\" (UID: \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\") " pod="openstack/swift-ring-rebalance-vnnqf"
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.513998 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-scripts\") pod \"swift-ring-rebalance-vnnqf\" (UID: \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\") " pod="openstack/swift-ring-rebalance-vnnqf"
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.514032 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-combined-ca-bundle\") pod \"swift-ring-rebalance-vnnqf\" (UID: \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\") " pod="openstack/swift-ring-rebalance-vnnqf"
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.514478 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-etc-swift\") pod \"swift-ring-rebalance-vnnqf\" (UID: \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\") " pod="openstack/swift-ring-rebalance-vnnqf"
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.514888 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-ring-data-devices\") pod \"swift-ring-rebalance-vnnqf\" (UID: \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\") " pod="openstack/swift-ring-rebalance-vnnqf"
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.617036 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-swiftconf\") pod \"swift-ring-rebalance-vnnqf\" (UID: \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\") " pod="openstack/swift-ring-rebalance-vnnqf"
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.617361 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-scripts\") pod \"swift-ring-rebalance-vnnqf\" (UID: \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\") " pod="openstack/swift-ring-rebalance-vnnqf"
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.617483 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-combined-ca-bundle\") pod \"swift-ring-rebalance-vnnqf\" (UID: \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\") " pod="openstack/swift-ring-rebalance-vnnqf"
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.617592 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2fd7aa8-b5cb-4c3c-976c-210541a77440-etc-swift\") pod \"swift-storage-0\" (UID: \"b2fd7aa8-b5cb-4c3c-976c-210541a77440\") " pod="openstack/swift-storage-0"
Dec 01 15:02:32 crc kubenswrapper[4637]: E1201 15:02:32.617720 4637 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 01 15:02:32 crc kubenswrapper[4637]: E1201 15:02:32.617759 4637 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 01 15:02:32 crc kubenswrapper[4637]: E1201 15:02:32.617828 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2fd7aa8-b5cb-4c3c-976c-210541a77440-etc-swift podName:b2fd7aa8-b5cb-4c3c-976c-210541a77440 nodeName:}" failed. No retries permitted until 2025-12-01 15:02:36.617803869 +0000 UTC m=+1007.135512757 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2fd7aa8-b5cb-4c3c-976c-210541a77440-etc-swift") pod "swift-storage-0" (UID: "b2fd7aa8-b5cb-4c3c-976c-210541a77440") : configmap "swift-ring-files" not found
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.617730 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-etc-swift\") pod \"swift-ring-rebalance-vnnqf\" (UID: \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\") " pod="openstack/swift-ring-rebalance-vnnqf"
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.618120 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-scripts\") pod \"swift-ring-rebalance-vnnqf\" (UID: \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\") " pod="openstack/swift-ring-rebalance-vnnqf"
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.618269 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-ring-data-devices\") pod \"swift-ring-rebalance-vnnqf\" (UID: \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\") " pod="openstack/swift-ring-rebalance-vnnqf"
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.618353 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9zn4\" (UniqueName: \"kubernetes.io/projected/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-kube-api-access-m9zn4\") pod \"swift-ring-rebalance-vnnqf\" (UID: \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\") " pod="openstack/swift-ring-rebalance-vnnqf"
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.618457 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-dispersionconf\") pod \"swift-ring-rebalance-vnnqf\" (UID: \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\") " pod="openstack/swift-ring-rebalance-vnnqf"
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.618626 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-etc-swift\") pod \"swift-ring-rebalance-vnnqf\" (UID: \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\") " pod="openstack/swift-ring-rebalance-vnnqf"
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.618791 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-ring-data-devices\") pod \"swift-ring-rebalance-vnnqf\" (UID: \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\") " pod="openstack/swift-ring-rebalance-vnnqf"
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.624411 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-dispersionconf\") pod \"swift-ring-rebalance-vnnqf\" (UID: \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\") " pod="openstack/swift-ring-rebalance-vnnqf"
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.624828 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-combined-ca-bundle\") pod \"swift-ring-rebalance-vnnqf\" (UID: \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\") " pod="openstack/swift-ring-rebalance-vnnqf"
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.625576 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-swiftconf\") pod \"swift-ring-rebalance-vnnqf\" (UID: \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\") " pod="openstack/swift-ring-rebalance-vnnqf"
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.646224 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9zn4\" (UniqueName: \"kubernetes.io/projected/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-kube-api-access-m9zn4\") pod \"swift-ring-rebalance-vnnqf\" (UID: \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\") " pod="openstack/swift-ring-rebalance-vnnqf"
Dec 01 15:02:32 crc kubenswrapper[4637]: I1201 15:02:32.728402 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vnnqf"
Dec 01 15:02:33 crc kubenswrapper[4637]: I1201 15:02:33.288369 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-vnnqf"]
Dec 01 15:02:33 crc kubenswrapper[4637]: I1201 15:02:33.335007 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wm78x"
Dec 01 15:02:33 crc kubenswrapper[4637]: I1201 15:02:33.399161 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wm78x"]
Dec 01 15:02:33 crc kubenswrapper[4637]: W1201 15:02:33.567823 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod911910d8_9f6c_4c97_b4b9_1bf0a5ae8a51.slice/crio-5487e7b031e7891d55b24f1fa5b21220a48d480312d81398feecec944592e0fb WatchSource:0}: Error finding container 5487e7b031e7891d55b24f1fa5b21220a48d480312d81398feecec944592e0fb: Status 404 returned error can't find the container with id 5487e7b031e7891d55b24f1fa5b21220a48d480312d81398feecec944592e0fb
Dec 01 15:02:33 crc kubenswrapper[4637]: I1201 15:02:33.589598 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vnnqf" event={"ID":"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51","Type":"ContainerStarted","Data":"5487e7b031e7891d55b24f1fa5b21220a48d480312d81398feecec944592e0fb"}
Dec 01 15:02:33 crc kubenswrapper[4637]: I1201 15:02:33.591177 4637 generic.go:334] "Generic (PLEG): container finished" podID="dae9e33c-c07e-4c13-8104-d1310d91de8c" containerID="2bdc7a2c3c0d02b0e8fe316351373c849237c2d6b4831808bf83e5f1ebb861bf" exitCode=0
Dec 01 15:02:33 crc kubenswrapper[4637]: I1201 15:02:33.591231 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dae9e33c-c07e-4c13-8104-d1310d91de8c","Type":"ContainerDied","Data":"2bdc7a2c3c0d02b0e8fe316351373c849237c2d6b4831808bf83e5f1ebb861bf"}
Dec 01 15:02:33 crc kubenswrapper[4637]: I1201 15:02:33.600617 4637 generic.go:334] "Generic (PLEG): container finished" podID="3cfe4e59-0e72-4440-b962-2f86664cb2d7" containerID="381b861726722ab41c9e9bca5d13d7a1f5a02160c4cb36c7421509c791282e51" exitCode=0
Dec 01 15:02:33 crc kubenswrapper[4637]: I1201 15:02:33.600984 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wm78x" podUID="908ecd2f-4feb-4dc1-b8f0-555329bbe3b6" containerName="registry-server" containerID="cri-o://cbcbd47efb05d762c1f9065c6eac502c2419c37b1f705de7d9a1fe1ae2397d99" gracePeriod=2
Dec 01 15:02:33 crc kubenswrapper[4637]: I1201 15:02:33.601064 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3cfe4e59-0e72-4440-b962-2f86664cb2d7","Type":"ContainerDied","Data":"381b861726722ab41c9e9bca5d13d7a1f5a02160c4cb36c7421509c791282e51"}
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.531489 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wm78x"
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.616275 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"db3ed190-cdcc-4547-b48f-d09f6e881dfb","Type":"ContainerStarted","Data":"a8345b4d9b03942c6c9d2ac02d420d4df24941dab8f4dc8a0b2641226ea338ac"}
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.625113 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dae9e33c-c07e-4c13-8104-d1310d91de8c","Type":"ContainerStarted","Data":"ab262fe208559b440e0147ca517ef046986ade543209f131f86be552bdca4020"}
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.629010 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3cfe4e59-0e72-4440-b962-2f86664cb2d7","Type":"ContainerStarted","Data":"0bf6546114ca9a295189e10f8b6beb43fd9d1d555531a2b60672b5841b4a1659"}
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.632435 4637 generic.go:334] "Generic (PLEG): container finished" podID="908ecd2f-4feb-4dc1-b8f0-555329bbe3b6" containerID="cbcbd47efb05d762c1f9065c6eac502c2419c37b1f705de7d9a1fe1ae2397d99" exitCode=0
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.632467 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wm78x" event={"ID":"908ecd2f-4feb-4dc1-b8f0-555329bbe3b6","Type":"ContainerDied","Data":"cbcbd47efb05d762c1f9065c6eac502c2419c37b1f705de7d9a1fe1ae2397d99"}
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.632487 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wm78x" event={"ID":"908ecd2f-4feb-4dc1-b8f0-555329bbe3b6","Type":"ContainerDied","Data":"a46a73fd893fdef489e4b1839454cafdd699479a925881a3f0c7db46a3c07ece"}
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.632504 4637 scope.go:117] "RemoveContainer" containerID="cbcbd47efb05d762c1f9065c6eac502c2419c37b1f705de7d9a1fe1ae2397d99"
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.632605 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wm78x"
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.656071 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=9.686466478 podStartE2EDuration="54.656051444s" podCreationTimestamp="2025-12-01 15:01:40 +0000 UTC" firstStartedPulling="2025-12-01 15:01:48.637999475 +0000 UTC m=+959.155708303" lastFinishedPulling="2025-12-01 15:02:33.607584441 +0000 UTC m=+1004.125293269" observedRunningTime="2025-12-01 15:02:34.653278789 +0000 UTC m=+1005.170987617" watchObservedRunningTime="2025-12-01 15:02:34.656051444 +0000 UTC m=+1005.173760272"
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.694181 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmccg\" (UniqueName: \"kubernetes.io/projected/908ecd2f-4feb-4dc1-b8f0-555329bbe3b6-kube-api-access-gmccg\") pod \"908ecd2f-4feb-4dc1-b8f0-555329bbe3b6\" (UID: \"908ecd2f-4feb-4dc1-b8f0-555329bbe3b6\") "
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.694376 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/908ecd2f-4feb-4dc1-b8f0-555329bbe3b6-catalog-content\") pod \"908ecd2f-4feb-4dc1-b8f0-555329bbe3b6\" (UID: \"908ecd2f-4feb-4dc1-b8f0-555329bbe3b6\") "
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.694424 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/908ecd2f-4feb-4dc1-b8f0-555329bbe3b6-utilities\") pod \"908ecd2f-4feb-4dc1-b8f0-555329bbe3b6\" (UID: \"908ecd2f-4feb-4dc1-b8f0-555329bbe3b6\") "
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.700373 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/908ecd2f-4feb-4dc1-b8f0-555329bbe3b6-utilities" (OuterVolumeSpecName: "utilities") pod "908ecd2f-4feb-4dc1-b8f0-555329bbe3b6" (UID: "908ecd2f-4feb-4dc1-b8f0-555329bbe3b6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.703439 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908ecd2f-4feb-4dc1-b8f0-555329bbe3b6-kube-api-access-gmccg" (OuterVolumeSpecName: "kube-api-access-gmccg") pod "908ecd2f-4feb-4dc1-b8f0-555329bbe3b6" (UID: "908ecd2f-4feb-4dc1-b8f0-555329bbe3b6"). InnerVolumeSpecName "kube-api-access-gmccg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.705014 4637 scope.go:117] "RemoveContainer" containerID="f7b5bd34c1abd2074db5872645d6bdbd6c54d97bf335a0273e230ef599472548"
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.708459 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=13.035531321 podStartE2EDuration="1m1.708386962s" podCreationTimestamp="2025-12-01 15:01:33 +0000 UTC" firstStartedPulling="2025-12-01 15:01:35.80078257 +0000 UTC m=+946.318491398" lastFinishedPulling="2025-12-01 15:02:24.473638211 +0000 UTC m=+994.991347039" observedRunningTime="2025-12-01 15:02:34.692624945 +0000 UTC m=+1005.210333773" watchObservedRunningTime="2025-12-01 15:02:34.708386962 +0000 UTC m=+1005.226095790"
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.785798 4637 scope.go:117] "RemoveContainer" containerID="882894b4001512e8f23f18538baa79895bbe33d24027dfe8d425532b250780db"
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.793233 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.793866 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.796761 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371974.058039 podStartE2EDuration="1m2.796736196s" podCreationTimestamp="2025-12-01 15:01:32 +0000 UTC" firstStartedPulling="2025-12-01 15:01:34.421346888 +0000 UTC m=+944.939055716" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:02:34.729776031 +0000 UTC m=+1005.247484859" watchObservedRunningTime="2025-12-01 15:02:34.796736196 +0000 UTC m=+1005.314445024"
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.798731 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/908ecd2f-4feb-4dc1-b8f0-555329bbe3b6-utilities\") on node \"crc\" DevicePath \"\""
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.799096 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmccg\" (UniqueName: \"kubernetes.io/projected/908ecd2f-4feb-4dc1-b8f0-555329bbe3b6-kube-api-access-gmccg\") on node \"crc\" DevicePath \"\""
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.806910 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/908ecd2f-4feb-4dc1-b8f0-555329bbe3b6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "908ecd2f-4feb-4dc1-b8f0-555329bbe3b6" (UID: "908ecd2f-4feb-4dc1-b8f0-555329bbe3b6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.874962 4637 scope.go:117] "RemoveContainer" containerID="cbcbd47efb05d762c1f9065c6eac502c2419c37b1f705de7d9a1fe1ae2397d99"
Dec 01 15:02:34 crc kubenswrapper[4637]: E1201 15:02:34.875677 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbcbd47efb05d762c1f9065c6eac502c2419c37b1f705de7d9a1fe1ae2397d99\": container with ID starting with cbcbd47efb05d762c1f9065c6eac502c2419c37b1f705de7d9a1fe1ae2397d99 not found: ID does not exist" containerID="cbcbd47efb05d762c1f9065c6eac502c2419c37b1f705de7d9a1fe1ae2397d99"
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.875731 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbcbd47efb05d762c1f9065c6eac502c2419c37b1f705de7d9a1fe1ae2397d99"} err="failed to get container status \"cbcbd47efb05d762c1f9065c6eac502c2419c37b1f705de7d9a1fe1ae2397d99\": rpc error: code = NotFound desc = could not find container \"cbcbd47efb05d762c1f9065c6eac502c2419c37b1f705de7d9a1fe1ae2397d99\": container with ID starting with cbcbd47efb05d762c1f9065c6eac502c2419c37b1f705de7d9a1fe1ae2397d99 not found: ID does not exist"
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.875767 4637 scope.go:117] "RemoveContainer" containerID="f7b5bd34c1abd2074db5872645d6bdbd6c54d97bf335a0273e230ef599472548"
Dec 01 15:02:34 crc kubenswrapper[4637]: E1201 15:02:34.876592 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7b5bd34c1abd2074db5872645d6bdbd6c54d97bf335a0273e230ef599472548\": container with ID starting with f7b5bd34c1abd2074db5872645d6bdbd6c54d97bf335a0273e230ef599472548 not found: ID does not exist" containerID="f7b5bd34c1abd2074db5872645d6bdbd6c54d97bf335a0273e230ef599472548"
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.876678 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7b5bd34c1abd2074db5872645d6bdbd6c54d97bf335a0273e230ef599472548"} err="failed to get container status \"f7b5bd34c1abd2074db5872645d6bdbd6c54d97bf335a0273e230ef599472548\": rpc error: code = NotFound desc = could not find container \"f7b5bd34c1abd2074db5872645d6bdbd6c54d97bf335a0273e230ef599472548\": container with ID starting with f7b5bd34c1abd2074db5872645d6bdbd6c54d97bf335a0273e230ef599472548 not found: ID does not exist"
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.876736 4637 scope.go:117] "RemoveContainer" containerID="882894b4001512e8f23f18538baa79895bbe33d24027dfe8d425532b250780db"
Dec 01 15:02:34 crc kubenswrapper[4637]: E1201 15:02:34.877246 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"882894b4001512e8f23f18538baa79895bbe33d24027dfe8d425532b250780db\": container with ID starting with 882894b4001512e8f23f18538baa79895bbe33d24027dfe8d425532b250780db not found: ID does not exist" containerID="882894b4001512e8f23f18538baa79895bbe33d24027dfe8d425532b250780db"
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.877290 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"882894b4001512e8f23f18538baa79895bbe33d24027dfe8d425532b250780db"} err="failed to get container status \"882894b4001512e8f23f18538baa79895bbe33d24027dfe8d425532b250780db\": rpc error: code = NotFound desc = could not find container \"882894b4001512e8f23f18538baa79895bbe33d24027dfe8d425532b250780db\": container with ID starting with 882894b4001512e8f23f18538baa79895bbe33d24027dfe8d425532b250780db not found: ID does not exist"
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.902077 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/908ecd2f-4feb-4dc1-b8f0-555329bbe3b6-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.972721 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wm78x"]
Dec 01 15:02:34 crc kubenswrapper[4637]: I1201 15:02:34.978822 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wm78x"]
Dec 01 15:02:35 crc kubenswrapper[4637]: I1201 15:02:35.677571 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Dec 01 15:02:35 crc kubenswrapper[4637]: I1201 15:02:35.814514 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="908ecd2f-4feb-4dc1-b8f0-555329bbe3b6" path="/var/lib/kubelet/pods/908ecd2f-4feb-4dc1-b8f0-555329bbe3b6/volumes"
Dec 01 15:02:35 crc kubenswrapper[4637]: E1201 15:02:35.927685 4637 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.204:44326->38.102.83.204:38409: read tcp 38.102.83.204:44326->38.102.83.204:38409: read: connection reset by peer
Dec 01 15:02:36 crc kubenswrapper[4637]: I1201 15:02:36.632447 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2fd7aa8-b5cb-4c3c-976c-210541a77440-etc-swift\") pod \"swift-storage-0\" (UID: \"b2fd7aa8-b5cb-4c3c-976c-210541a77440\") " pod="openstack/swift-storage-0"
Dec 01 15:02:36 crc kubenswrapper[4637]: E1201 15:02:36.632672 4637 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 01 15:02:36 crc kubenswrapper[4637]: E1201 15:02:36.632882 4637 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 01 15:02:36 crc kubenswrapper[4637]: E1201 15:02:36.632959 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2fd7aa8-b5cb-4c3c-976c-210541a77440-etc-swift podName:b2fd7aa8-b5cb-4c3c-976c-210541a77440 nodeName:}" failed. No retries permitted until 2025-12-01 15:02:44.632925374 +0000 UTC m=+1015.150634202 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2fd7aa8-b5cb-4c3c-976c-210541a77440-etc-swift") pod "swift-storage-0" (UID: "b2fd7aa8-b5cb-4c3c-976c-210541a77440") : configmap "swift-ring-files" not found
Dec 01 15:02:36 crc kubenswrapper[4637]: I1201 15:02:36.677704 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Dec 01 15:02:38 crc kubenswrapper[4637]: I1201 15:02:38.037003 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-5qcft"
Dec 01 15:02:38 crc kubenswrapper[4637]: I1201 15:02:38.112696 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-8m9js"]
Dec 01 15:02:38 crc kubenswrapper[4637]: I1201 15:02:38.118210 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-8m9js" podUID="56ed39eb-94a6-49c5-acc8-d30f60c654a9" containerName="dnsmasq-dns" containerID="cri-o://d7699f53793b9c9189fc0260a665bca194e4f005d00ec449c0cc1f10ce4b02f0" gracePeriod=10
Dec 01 15:02:38 crc kubenswrapper[4637]: I1201 15:02:38.661675 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xsqc9" podUID="30f33978-c393-4faf-99c0-b6ce509f1d3f" containerName="registry-server" probeResult="failure" output=<
Dec 01 15:02:38 crc kubenswrapper[4637]: timeout: failed to connect service ":50051" within 1s
Dec 01 15:02:38 crc kubenswrapper[4637]: >
Dec 01 15:02:38 crc kubenswrapper[4637]: I1201 15:02:38.674419 4637 generic.go:334] "Generic (PLEG): container finished" podID="56ed39eb-94a6-49c5-acc8-d30f60c654a9" containerID="d7699f53793b9c9189fc0260a665bca194e4f005d00ec449c0cc1f10ce4b02f0" exitCode=0
Dec 01 15:02:38 crc kubenswrapper[4637]: I1201 15:02:38.674462 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-8m9js" event={"ID":"56ed39eb-94a6-49c5-acc8-d30f60c654a9","Type":"ContainerDied","Data":"d7699f53793b9c9189fc0260a665bca194e4f005d00ec449c0cc1f10ce4b02f0"}
Dec 01 15:02:38 crc kubenswrapper[4637]: I1201 15:02:38.724512 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Dec 01 15:02:39 crc kubenswrapper[4637]: I1201 15:02:39.004758 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-8m9js"
Dec 01 15:02:39 crc kubenswrapper[4637]: I1201 15:02:39.081285 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ed39eb-94a6-49c5-acc8-d30f60c654a9-config\") pod \"56ed39eb-94a6-49c5-acc8-d30f60c654a9\" (UID: \"56ed39eb-94a6-49c5-acc8-d30f60c654a9\") "
Dec 01 15:02:39 crc kubenswrapper[4637]: I1201 15:02:39.081350 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56ed39eb-94a6-49c5-acc8-d30f60c654a9-dns-svc\") pod \"56ed39eb-94a6-49c5-acc8-d30f60c654a9\" (UID: \"56ed39eb-94a6-49c5-acc8-d30f60c654a9\") "
Dec 01 15:02:39 crc kubenswrapper[4637]: I1201 15:02:39.081420 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56ed39eb-94a6-49c5-acc8-d30f60c654a9-ovsdbserver-nb\") pod \"56ed39eb-94a6-49c5-acc8-d30f60c654a9\" (UID: \"56ed39eb-94a6-49c5-acc8-d30f60c654a9\") "
Dec 01 15:02:39 crc kubenswrapper[4637]: I1201 15:02:39.081469 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56ed39eb-94a6-49c5-acc8-d30f60c654a9-ovsdbserver-sb\") pod \"56ed39eb-94a6-49c5-acc8-d30f60c654a9\" (UID: \"56ed39eb-94a6-49c5-acc8-d30f60c654a9\") "
Dec 01 15:02:39 crc kubenswrapper[4637]: I1201 15:02:39.081497 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssb5j\" (UniqueName: \"kubernetes.io/projected/56ed39eb-94a6-49c5-acc8-d30f60c654a9-kube-api-access-ssb5j\") pod \"56ed39eb-94a6-49c5-acc8-d30f60c654a9\" (UID: \"56ed39eb-94a6-49c5-acc8-d30f60c654a9\") "
Dec 01 15:02:39 crc kubenswrapper[4637]: I1201 15:02:39.088316 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56ed39eb-94a6-49c5-acc8-d30f60c654a9-kube-api-access-ssb5j" (OuterVolumeSpecName: "kube-api-access-ssb5j") pod "56ed39eb-94a6-49c5-acc8-d30f60c654a9" (UID: "56ed39eb-94a6-49c5-acc8-d30f60c654a9"). InnerVolumeSpecName "kube-api-access-ssb5j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 15:02:39 crc kubenswrapper[4637]: I1201 15:02:39.129981 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ed39eb-94a6-49c5-acc8-d30f60c654a9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "56ed39eb-94a6-49c5-acc8-d30f60c654a9" (UID: "56ed39eb-94a6-49c5-acc8-d30f60c654a9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 15:02:39 crc kubenswrapper[4637]: I1201 15:02:39.143439 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ed39eb-94a6-49c5-acc8-d30f60c654a9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "56ed39eb-94a6-49c5-acc8-d30f60c654a9" (UID: "56ed39eb-94a6-49c5-acc8-d30f60c654a9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 15:02:39 crc kubenswrapper[4637]: I1201 15:02:39.146768 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ed39eb-94a6-49c5-acc8-d30f60c654a9-config" (OuterVolumeSpecName: "config") pod "56ed39eb-94a6-49c5-acc8-d30f60c654a9" (UID: "56ed39eb-94a6-49c5-acc8-d30f60c654a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 15:02:39 crc kubenswrapper[4637]: I1201 15:02:39.152136 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ed39eb-94a6-49c5-acc8-d30f60c654a9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56ed39eb-94a6-49c5-acc8-d30f60c654a9" (UID: "56ed39eb-94a6-49c5-acc8-d30f60c654a9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 15:02:39 crc kubenswrapper[4637]: I1201 15:02:39.183157 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ed39eb-94a6-49c5-acc8-d30f60c654a9-config\") on node \"crc\" DevicePath \"\""
Dec 01 15:02:39 crc kubenswrapper[4637]: I1201 15:02:39.183197 4637 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56ed39eb-94a6-49c5-acc8-d30f60c654a9-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 01 15:02:39 crc kubenswrapper[4637]: I1201 15:02:39.183212 4637 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56ed39eb-94a6-49c5-acc8-d30f60c654a9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 01 15:02:39 crc kubenswrapper[4637]: I1201 15:02:39.183227 4637 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56ed39eb-94a6-49c5-acc8-d30f60c654a9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 01 15:02:39 crc kubenswrapper[4637]: I1201 15:02:39.183240 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssb5j\" (UniqueName: \"kubernetes.io/projected/56ed39eb-94a6-49c5-acc8-d30f60c654a9-kube-api-access-ssb5j\") on node \"crc\" DevicePath \"\""
Dec 01 15:02:39 crc kubenswrapper[4637]: I1201 15:02:39.685817 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-8m9js"
Dec 01 15:02:39 crc kubenswrapper[4637]: I1201 15:02:39.685808 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-8m9js" event={"ID":"56ed39eb-94a6-49c5-acc8-d30f60c654a9","Type":"ContainerDied","Data":"247780f1401e1aaeb9893a38774cb872755d7756326cfa1410cd92aac53efb28"}
Dec 01 15:02:39 crc kubenswrapper[4637]: I1201 15:02:39.686180 4637 scope.go:117] "RemoveContainer" containerID="d7699f53793b9c9189fc0260a665bca194e4f005d00ec449c0cc1f10ce4b02f0"
Dec 01 15:02:39 crc kubenswrapper[4637]: I1201 15:02:39.693100 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vnnqf" event={"ID":"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51","Type":"ContainerStarted","Data":"7855faf0970f1d9d1829a834ea89ca9dfe5599ff99d99158d7ac0bf40ba37ebc"}
Dec 01 15:02:39 crc kubenswrapper[4637]: I1201 15:02:39.715555 4637 scope.go:117] "RemoveContainer" containerID="a8ff9ca46caa1217a93b1fa62d864a1cb8b63a05f94a2aae60698ed0e516aba5"
Dec 01 15:02:39 crc kubenswrapper[4637]: I1201 15:02:39.719845 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-vnnqf" podStartSLOduration=2.560392213 podStartE2EDuration="7.719823986s" podCreationTimestamp="2025-12-01 15:02:32 +0000 UTC" firstStartedPulling="2025-12-01 15:02:33.569538081 +0000 UTC m=+1004.087246909" lastFinishedPulling="2025-12-01 15:02:38.728969854 +0000 UTC m=+1009.246678682" observedRunningTime="2025-12-01 15:02:39.715992762 +0000 UTC m=+1010.233701590" watchObservedRunningTime="2025-12-01 15:02:39.719823986 +0000 UTC m=+1010.237532814"
Dec 01 15:02:39 crc kubenswrapper[4637]: I1201 15:02:39.736463 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-8m9js"]
Dec 01 15:02:39 crc kubenswrapper[4637]: I1201 15:02:39.755882 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-8m9js"]
Dec 01 15:02:39 crc kubenswrapper[4637]: I1201 15:02:39.782132 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56ed39eb-94a6-49c5-acc8-d30f60c654a9" path="/var/lib/kubelet/pods/56ed39eb-94a6-49c5-acc8-d30f60c654a9/volumes"
Dec 01 15:02:40 crc kubenswrapper[4637]: I1201 15:02:40.257671 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cjd5c"
Dec 01 15:02:40 crc kubenswrapper[4637]: I1201 15:02:40.385411 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cjd5c"
Dec 01 15:02:40 crc kubenswrapper[4637]: I1201 15:02:40.503393 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cjd5c"]
Dec 01 15:02:41 crc kubenswrapper[4637]: I1201 15:02:41.713391 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cjd5c" podUID="bb20e93f-4cda-4b8f-a1d8-7eaa697673b3" containerName="registry-server" containerID="cri-o://631ff41be70f5e648f7f5e4b8e7edb4c64b06d40f827b1e96f9d4465014cb956" gracePeriod=2
Dec 01 15:02:41 crc kubenswrapper[4637]: I1201 15:02:41.714583 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Dec 01 15:02:41 crc kubenswrapper[4637]: I1201 15:02:41.961300 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Dec 01 15:02:41 crc kubenswrapper[4637]: E1201 15:02:41.961702 4637 cpu_manager.go:410] "RemoveStaleState: removing container"
podUID="908ecd2f-4feb-4dc1-b8f0-555329bbe3b6" containerName="registry-server" Dec 01 15:02:41 crc kubenswrapper[4637]: I1201 15:02:41.961719 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="908ecd2f-4feb-4dc1-b8f0-555329bbe3b6" containerName="registry-server" Dec 01 15:02:41 crc kubenswrapper[4637]: E1201 15:02:41.961743 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ed39eb-94a6-49c5-acc8-d30f60c654a9" containerName="dnsmasq-dns" Dec 01 15:02:41 crc kubenswrapper[4637]: I1201 15:02:41.961751 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ed39eb-94a6-49c5-acc8-d30f60c654a9" containerName="dnsmasq-dns" Dec 01 15:02:41 crc kubenswrapper[4637]: E1201 15:02:41.961767 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="908ecd2f-4feb-4dc1-b8f0-555329bbe3b6" containerName="extract-utilities" Dec 01 15:02:41 crc kubenswrapper[4637]: I1201 15:02:41.961776 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="908ecd2f-4feb-4dc1-b8f0-555329bbe3b6" containerName="extract-utilities" Dec 01 15:02:41 crc kubenswrapper[4637]: E1201 15:02:41.961793 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="908ecd2f-4feb-4dc1-b8f0-555329bbe3b6" containerName="extract-content" Dec 01 15:02:41 crc kubenswrapper[4637]: I1201 15:02:41.961800 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="908ecd2f-4feb-4dc1-b8f0-555329bbe3b6" containerName="extract-content" Dec 01 15:02:41 crc kubenswrapper[4637]: E1201 15:02:41.961821 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ed39eb-94a6-49c5-acc8-d30f60c654a9" containerName="init" Dec 01 15:02:41 crc kubenswrapper[4637]: I1201 15:02:41.961828 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ed39eb-94a6-49c5-acc8-d30f60c654a9" containerName="init" Dec 01 15:02:41 crc kubenswrapper[4637]: I1201 15:02:41.962075 4637 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="908ecd2f-4feb-4dc1-b8f0-555329bbe3b6" containerName="registry-server" Dec 01 15:02:41 crc kubenswrapper[4637]: I1201 15:02:41.962111 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="56ed39eb-94a6-49c5-acc8-d30f60c654a9" containerName="dnsmasq-dns" Dec 01 15:02:41 crc kubenswrapper[4637]: I1201 15:02:41.963244 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 01 15:02:41 crc kubenswrapper[4637]: I1201 15:02:41.967061 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 01 15:02:41 crc kubenswrapper[4637]: I1201 15:02:41.967216 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 01 15:02:41 crc kubenswrapper[4637]: I1201 15:02:41.967421 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-kvdzw" Dec 01 15:02:41 crc kubenswrapper[4637]: I1201 15:02:41.980984 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.001535 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.045030 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5348dcbd-104a-4fff-9414-bb859f58fd52-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5348dcbd-104a-4fff-9414-bb859f58fd52\") " pod="openstack/ovn-northd-0" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.045161 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn4t4\" (UniqueName: \"kubernetes.io/projected/5348dcbd-104a-4fff-9414-bb859f58fd52-kube-api-access-sn4t4\") pod \"ovn-northd-0\" (UID: \"5348dcbd-104a-4fff-9414-bb859f58fd52\") " 
pod="openstack/ovn-northd-0" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.045184 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5348dcbd-104a-4fff-9414-bb859f58fd52-config\") pod \"ovn-northd-0\" (UID: \"5348dcbd-104a-4fff-9414-bb859f58fd52\") " pod="openstack/ovn-northd-0" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.045236 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5348dcbd-104a-4fff-9414-bb859f58fd52-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5348dcbd-104a-4fff-9414-bb859f58fd52\") " pod="openstack/ovn-northd-0" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.050205 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5348dcbd-104a-4fff-9414-bb859f58fd52-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5348dcbd-104a-4fff-9414-bb859f58fd52\") " pod="openstack/ovn-northd-0" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.050289 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5348dcbd-104a-4fff-9414-bb859f58fd52-scripts\") pod \"ovn-northd-0\" (UID: \"5348dcbd-104a-4fff-9414-bb859f58fd52\") " pod="openstack/ovn-northd-0" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.050314 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5348dcbd-104a-4fff-9414-bb859f58fd52-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5348dcbd-104a-4fff-9414-bb859f58fd52\") " pod="openstack/ovn-northd-0" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.151643 4637 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5348dcbd-104a-4fff-9414-bb859f58fd52-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5348dcbd-104a-4fff-9414-bb859f58fd52\") " pod="openstack/ovn-northd-0" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.151692 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5348dcbd-104a-4fff-9414-bb859f58fd52-scripts\") pod \"ovn-northd-0\" (UID: \"5348dcbd-104a-4fff-9414-bb859f58fd52\") " pod="openstack/ovn-northd-0" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.151717 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5348dcbd-104a-4fff-9414-bb859f58fd52-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5348dcbd-104a-4fff-9414-bb859f58fd52\") " pod="openstack/ovn-northd-0" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.151758 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5348dcbd-104a-4fff-9414-bb859f58fd52-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5348dcbd-104a-4fff-9414-bb859f58fd52\") " pod="openstack/ovn-northd-0" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.151801 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn4t4\" (UniqueName: \"kubernetes.io/projected/5348dcbd-104a-4fff-9414-bb859f58fd52-kube-api-access-sn4t4\") pod \"ovn-northd-0\" (UID: \"5348dcbd-104a-4fff-9414-bb859f58fd52\") " pod="openstack/ovn-northd-0" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.151821 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5348dcbd-104a-4fff-9414-bb859f58fd52-config\") pod \"ovn-northd-0\" (UID: 
\"5348dcbd-104a-4fff-9414-bb859f58fd52\") " pod="openstack/ovn-northd-0" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.151854 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5348dcbd-104a-4fff-9414-bb859f58fd52-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5348dcbd-104a-4fff-9414-bb859f58fd52\") " pod="openstack/ovn-northd-0" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.152601 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5348dcbd-104a-4fff-9414-bb859f58fd52-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5348dcbd-104a-4fff-9414-bb859f58fd52\") " pod="openstack/ovn-northd-0" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.153475 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5348dcbd-104a-4fff-9414-bb859f58fd52-config\") pod \"ovn-northd-0\" (UID: \"5348dcbd-104a-4fff-9414-bb859f58fd52\") " pod="openstack/ovn-northd-0" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.155119 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5348dcbd-104a-4fff-9414-bb859f58fd52-scripts\") pod \"ovn-northd-0\" (UID: \"5348dcbd-104a-4fff-9414-bb859f58fd52\") " pod="openstack/ovn-northd-0" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.161569 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5348dcbd-104a-4fff-9414-bb859f58fd52-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5348dcbd-104a-4fff-9414-bb859f58fd52\") " pod="openstack/ovn-northd-0" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.164583 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5348dcbd-104a-4fff-9414-bb859f58fd52-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5348dcbd-104a-4fff-9414-bb859f58fd52\") " pod="openstack/ovn-northd-0" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.176146 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn4t4\" (UniqueName: \"kubernetes.io/projected/5348dcbd-104a-4fff-9414-bb859f58fd52-kube-api-access-sn4t4\") pod \"ovn-northd-0\" (UID: \"5348dcbd-104a-4fff-9414-bb859f58fd52\") " pod="openstack/ovn-northd-0" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.180809 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5348dcbd-104a-4fff-9414-bb859f58fd52-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5348dcbd-104a-4fff-9414-bb859f58fd52\") " pod="openstack/ovn-northd-0" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.247550 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cjd5c" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.301108 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.355603 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb20e93f-4cda-4b8f-a1d8-7eaa697673b3-utilities\") pod \"bb20e93f-4cda-4b8f-a1d8-7eaa697673b3\" (UID: \"bb20e93f-4cda-4b8f-a1d8-7eaa697673b3\") " Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.355698 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb20e93f-4cda-4b8f-a1d8-7eaa697673b3-catalog-content\") pod \"bb20e93f-4cda-4b8f-a1d8-7eaa697673b3\" (UID: \"bb20e93f-4cda-4b8f-a1d8-7eaa697673b3\") " Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.355743 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8xbq\" (UniqueName: \"kubernetes.io/projected/bb20e93f-4cda-4b8f-a1d8-7eaa697673b3-kube-api-access-t8xbq\") pod \"bb20e93f-4cda-4b8f-a1d8-7eaa697673b3\" (UID: \"bb20e93f-4cda-4b8f-a1d8-7eaa697673b3\") " Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.357400 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb20e93f-4cda-4b8f-a1d8-7eaa697673b3-utilities" (OuterVolumeSpecName: "utilities") pod "bb20e93f-4cda-4b8f-a1d8-7eaa697673b3" (UID: "bb20e93f-4cda-4b8f-a1d8-7eaa697673b3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.358211 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb20e93f-4cda-4b8f-a1d8-7eaa697673b3-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.361828 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb20e93f-4cda-4b8f-a1d8-7eaa697673b3-kube-api-access-t8xbq" (OuterVolumeSpecName: "kube-api-access-t8xbq") pod "bb20e93f-4cda-4b8f-a1d8-7eaa697673b3" (UID: "bb20e93f-4cda-4b8f-a1d8-7eaa697673b3"). InnerVolumeSpecName "kube-api-access-t8xbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.388880 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb20e93f-4cda-4b8f-a1d8-7eaa697673b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb20e93f-4cda-4b8f-a1d8-7eaa697673b3" (UID: "bb20e93f-4cda-4b8f-a1d8-7eaa697673b3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.459914 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb20e93f-4cda-4b8f-a1d8-7eaa697673b3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.460250 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8xbq\" (UniqueName: \"kubernetes.io/projected/bb20e93f-4cda-4b8f-a1d8-7eaa697673b3-kube-api-access-t8xbq\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.724130 4637 generic.go:334] "Generic (PLEG): container finished" podID="bb20e93f-4cda-4b8f-a1d8-7eaa697673b3" containerID="631ff41be70f5e648f7f5e4b8e7edb4c64b06d40f827b1e96f9d4465014cb956" exitCode=0 Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.724176 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cjd5c" event={"ID":"bb20e93f-4cda-4b8f-a1d8-7eaa697673b3","Type":"ContainerDied","Data":"631ff41be70f5e648f7f5e4b8e7edb4c64b06d40f827b1e96f9d4465014cb956"} Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.724203 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cjd5c" event={"ID":"bb20e93f-4cda-4b8f-a1d8-7eaa697673b3","Type":"ContainerDied","Data":"35f1706e32e9ce2f309b99a8bf779def9b06e590ce358c609c3467dc25a04f02"} Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.724207 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cjd5c" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.724222 4637 scope.go:117] "RemoveContainer" containerID="631ff41be70f5e648f7f5e4b8e7edb4c64b06d40f827b1e96f9d4465014cb956" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.753361 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cjd5c"] Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.760154 4637 scope.go:117] "RemoveContainer" containerID="44fc5881c71765b6dbad6580789bef885475d770d9cd9eef0461ea0618d3877e" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.763705 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cjd5c"] Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.792621 4637 scope.go:117] "RemoveContainer" containerID="097a101d4095bb542dc399d12aaa40c957752aa6667f83b497d076bcf39c5ec6" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.822652 4637 scope.go:117] "RemoveContainer" containerID="631ff41be70f5e648f7f5e4b8e7edb4c64b06d40f827b1e96f9d4465014cb956" Dec 01 15:02:42 crc kubenswrapper[4637]: E1201 15:02:42.823240 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"631ff41be70f5e648f7f5e4b8e7edb4c64b06d40f827b1e96f9d4465014cb956\": container with ID starting with 631ff41be70f5e648f7f5e4b8e7edb4c64b06d40f827b1e96f9d4465014cb956 not found: ID does not exist" containerID="631ff41be70f5e648f7f5e4b8e7edb4c64b06d40f827b1e96f9d4465014cb956" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.823272 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"631ff41be70f5e648f7f5e4b8e7edb4c64b06d40f827b1e96f9d4465014cb956"} err="failed to get container status \"631ff41be70f5e648f7f5e4b8e7edb4c64b06d40f827b1e96f9d4465014cb956\": rpc error: code = NotFound desc = could not find container 
\"631ff41be70f5e648f7f5e4b8e7edb4c64b06d40f827b1e96f9d4465014cb956\": container with ID starting with 631ff41be70f5e648f7f5e4b8e7edb4c64b06d40f827b1e96f9d4465014cb956 not found: ID does not exist" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.823294 4637 scope.go:117] "RemoveContainer" containerID="44fc5881c71765b6dbad6580789bef885475d770d9cd9eef0461ea0618d3877e" Dec 01 15:02:42 crc kubenswrapper[4637]: E1201 15:02:42.823549 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44fc5881c71765b6dbad6580789bef885475d770d9cd9eef0461ea0618d3877e\": container with ID starting with 44fc5881c71765b6dbad6580789bef885475d770d9cd9eef0461ea0618d3877e not found: ID does not exist" containerID="44fc5881c71765b6dbad6580789bef885475d770d9cd9eef0461ea0618d3877e" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.823570 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44fc5881c71765b6dbad6580789bef885475d770d9cd9eef0461ea0618d3877e"} err="failed to get container status \"44fc5881c71765b6dbad6580789bef885475d770d9cd9eef0461ea0618d3877e\": rpc error: code = NotFound desc = could not find container \"44fc5881c71765b6dbad6580789bef885475d770d9cd9eef0461ea0618d3877e\": container with ID starting with 44fc5881c71765b6dbad6580789bef885475d770d9cd9eef0461ea0618d3877e not found: ID does not exist" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.823600 4637 scope.go:117] "RemoveContainer" containerID="097a101d4095bb542dc399d12aaa40c957752aa6667f83b497d076bcf39c5ec6" Dec 01 15:02:42 crc kubenswrapper[4637]: E1201 15:02:42.823813 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"097a101d4095bb542dc399d12aaa40c957752aa6667f83b497d076bcf39c5ec6\": container with ID starting with 097a101d4095bb542dc399d12aaa40c957752aa6667f83b497d076bcf39c5ec6 not found: ID does not exist" 
containerID="097a101d4095bb542dc399d12aaa40c957752aa6667f83b497d076bcf39c5ec6" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.823835 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"097a101d4095bb542dc399d12aaa40c957752aa6667f83b497d076bcf39c5ec6"} err="failed to get container status \"097a101d4095bb542dc399d12aaa40c957752aa6667f83b497d076bcf39c5ec6\": rpc error: code = NotFound desc = could not find container \"097a101d4095bb542dc399d12aaa40c957752aa6667f83b497d076bcf39c5ec6\": container with ID starting with 097a101d4095bb542dc399d12aaa40c957752aa6667f83b497d076bcf39c5ec6 not found: ID does not exist" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.840507 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 01 15:02:42 crc kubenswrapper[4637]: W1201 15:02:42.846103 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5348dcbd_104a_4fff_9414_bb859f58fd52.slice/crio-e41499284bce39040dc42b6f56ddadfbb32586062cca7246115f066f14cd68cc WatchSource:0}: Error finding container e41499284bce39040dc42b6f56ddadfbb32586062cca7246115f066f14cd68cc: Status 404 returned error can't find the container with id e41499284bce39040dc42b6f56ddadfbb32586062cca7246115f066f14cd68cc Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.916254 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 01 15:02:42 crc kubenswrapper[4637]: I1201 15:02:42.974216 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 01 15:02:43 crc kubenswrapper[4637]: I1201 15:02:43.516222 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 01 15:02:43 crc kubenswrapper[4637]: I1201 15:02:43.516270 4637 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 01 15:02:43 crc kubenswrapper[4637]: I1201 15:02:43.568456 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 01 15:02:43 crc kubenswrapper[4637]: I1201 15:02:43.735669 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5348dcbd-104a-4fff-9414-bb859f58fd52","Type":"ContainerStarted","Data":"e41499284bce39040dc42b6f56ddadfbb32586062cca7246115f066f14cd68cc"} Dec 01 15:02:43 crc kubenswrapper[4637]: I1201 15:02:43.781415 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb20e93f-4cda-4b8f-a1d8-7eaa697673b3" path="/var/lib/kubelet/pods/bb20e93f-4cda-4b8f-a1d8-7eaa697673b3/volumes" Dec 01 15:02:43 crc kubenswrapper[4637]: I1201 15:02:43.812751 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 01 15:02:44 crc kubenswrapper[4637]: I1201 15:02:44.674132 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-4r967"] Dec 01 15:02:44 crc kubenswrapper[4637]: E1201 15:02:44.674459 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb20e93f-4cda-4b8f-a1d8-7eaa697673b3" containerName="extract-content" Dec 01 15:02:44 crc kubenswrapper[4637]: I1201 15:02:44.674475 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb20e93f-4cda-4b8f-a1d8-7eaa697673b3" containerName="extract-content" Dec 01 15:02:44 crc kubenswrapper[4637]: E1201 15:02:44.674486 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb20e93f-4cda-4b8f-a1d8-7eaa697673b3" containerName="registry-server" Dec 01 15:02:44 crc kubenswrapper[4637]: I1201 15:02:44.674492 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb20e93f-4cda-4b8f-a1d8-7eaa697673b3" containerName="registry-server" Dec 01 15:02:44 crc kubenswrapper[4637]: E1201 15:02:44.674509 4637 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="bb20e93f-4cda-4b8f-a1d8-7eaa697673b3" containerName="extract-utilities" Dec 01 15:02:44 crc kubenswrapper[4637]: I1201 15:02:44.674517 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb20e93f-4cda-4b8f-a1d8-7eaa697673b3" containerName="extract-utilities" Dec 01 15:02:44 crc kubenswrapper[4637]: I1201 15:02:44.674671 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb20e93f-4cda-4b8f-a1d8-7eaa697673b3" containerName="registry-server" Dec 01 15:02:44 crc kubenswrapper[4637]: I1201 15:02:44.675212 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4r967" Dec 01 15:02:44 crc kubenswrapper[4637]: I1201 15:02:44.698608 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-4r967"] Dec 01 15:02:44 crc kubenswrapper[4637]: I1201 15:02:44.699902 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fbhn\" (UniqueName: \"kubernetes.io/projected/5a4b0da8-5193-4f28-944c-15c3d1f547d6-kube-api-access-7fbhn\") pod \"keystone-db-create-4r967\" (UID: \"5a4b0da8-5193-4f28-944c-15c3d1f547d6\") " pod="openstack/keystone-db-create-4r967" Dec 01 15:02:44 crc kubenswrapper[4637]: I1201 15:02:44.700039 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2fd7aa8-b5cb-4c3c-976c-210541a77440-etc-swift\") pod \"swift-storage-0\" (UID: \"b2fd7aa8-b5cb-4c3c-976c-210541a77440\") " pod="openstack/swift-storage-0" Dec 01 15:02:44 crc kubenswrapper[4637]: E1201 15:02:44.700185 4637 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 01 15:02:44 crc kubenswrapper[4637]: E1201 15:02:44.700203 4637 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap 
"swift-ring-files" not found Dec 01 15:02:44 crc kubenswrapper[4637]: E1201 15:02:44.700252 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2fd7aa8-b5cb-4c3c-976c-210541a77440-etc-swift podName:b2fd7aa8-b5cb-4c3c-976c-210541a77440 nodeName:}" failed. No retries permitted until 2025-12-01 15:03:00.700236669 +0000 UTC m=+1031.217945497 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2fd7aa8-b5cb-4c3c-976c-210541a77440-etc-swift") pod "swift-storage-0" (UID: "b2fd7aa8-b5cb-4c3c-976c-210541a77440") : configmap "swift-ring-files" not found Dec 01 15:02:44 crc kubenswrapper[4637]: I1201 15:02:44.803776 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fbhn\" (UniqueName: \"kubernetes.io/projected/5a4b0da8-5193-4f28-944c-15c3d1f547d6-kube-api-access-7fbhn\") pod \"keystone-db-create-4r967\" (UID: \"5a4b0da8-5193-4f28-944c-15c3d1f547d6\") " pod="openstack/keystone-db-create-4r967" Dec 01 15:02:44 crc kubenswrapper[4637]: I1201 15:02:44.828521 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fbhn\" (UniqueName: \"kubernetes.io/projected/5a4b0da8-5193-4f28-944c-15c3d1f547d6-kube-api-access-7fbhn\") pod \"keystone-db-create-4r967\" (UID: \"5a4b0da8-5193-4f28-944c-15c3d1f547d6\") " pod="openstack/keystone-db-create-4r967" Dec 01 15:02:45 crc kubenswrapper[4637]: I1201 15:02:45.034367 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4r967" Dec 01 15:02:45 crc kubenswrapper[4637]: I1201 15:02:45.202876 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-f2htt"] Dec 01 15:02:45 crc kubenswrapper[4637]: I1201 15:02:45.204223 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-f2htt" Dec 01 15:02:45 crc kubenswrapper[4637]: I1201 15:02:45.211994 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sm56\" (UniqueName: \"kubernetes.io/projected/97be1ae1-cfd9-421b-a02a-ea2f8d1be388-kube-api-access-8sm56\") pod \"placement-db-create-f2htt\" (UID: \"97be1ae1-cfd9-421b-a02a-ea2f8d1be388\") " pod="openstack/placement-db-create-f2htt" Dec 01 15:02:45 crc kubenswrapper[4637]: I1201 15:02:45.240350 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-f2htt"] Dec 01 15:02:45 crc kubenswrapper[4637]: I1201 15:02:45.314404 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sm56\" (UniqueName: \"kubernetes.io/projected/97be1ae1-cfd9-421b-a02a-ea2f8d1be388-kube-api-access-8sm56\") pod \"placement-db-create-f2htt\" (UID: \"97be1ae1-cfd9-421b-a02a-ea2f8d1be388\") " pod="openstack/placement-db-create-f2htt" Dec 01 15:02:45 crc kubenswrapper[4637]: I1201 15:02:45.346210 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sm56\" (UniqueName: \"kubernetes.io/projected/97be1ae1-cfd9-421b-a02a-ea2f8d1be388-kube-api-access-8sm56\") pod \"placement-db-create-f2htt\" (UID: \"97be1ae1-cfd9-421b-a02a-ea2f8d1be388\") " pod="openstack/placement-db-create-f2htt" Dec 01 15:02:45 crc kubenswrapper[4637]: I1201 15:02:45.346709 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-v7784"] Dec 01 15:02:45 crc kubenswrapper[4637]: I1201 15:02:45.350119 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-v7784" Dec 01 15:02:45 crc kubenswrapper[4637]: I1201 15:02:45.358359 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-v7784"] Dec 01 15:02:45 crc kubenswrapper[4637]: I1201 15:02:45.416332 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfwv7\" (UniqueName: \"kubernetes.io/projected/9977d860-baa6-4a79-83ac-95e324302046-kube-api-access-qfwv7\") pod \"glance-db-create-v7784\" (UID: \"9977d860-baa6-4a79-83ac-95e324302046\") " pod="openstack/glance-db-create-v7784" Dec 01 15:02:45 crc kubenswrapper[4637]: I1201 15:02:45.518594 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfwv7\" (UniqueName: \"kubernetes.io/projected/9977d860-baa6-4a79-83ac-95e324302046-kube-api-access-qfwv7\") pod \"glance-db-create-v7784\" (UID: \"9977d860-baa6-4a79-83ac-95e324302046\") " pod="openstack/glance-db-create-v7784" Dec 01 15:02:45 crc kubenswrapper[4637]: I1201 15:02:45.526278 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-f2htt" Dec 01 15:02:45 crc kubenswrapper[4637]: I1201 15:02:45.540650 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfwv7\" (UniqueName: \"kubernetes.io/projected/9977d860-baa6-4a79-83ac-95e324302046-kube-api-access-qfwv7\") pod \"glance-db-create-v7784\" (UID: \"9977d860-baa6-4a79-83ac-95e324302046\") " pod="openstack/glance-db-create-v7784" Dec 01 15:02:45 crc kubenswrapper[4637]: I1201 15:02:45.603059 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-4r967"] Dec 01 15:02:45 crc kubenswrapper[4637]: W1201 15:02:45.617269 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a4b0da8_5193_4f28_944c_15c3d1f547d6.slice/crio-9e8ed5c000f3ad123d44f98651e4574254cecad241a4f7ff60431181cee9a1c1 WatchSource:0}: Error finding container 9e8ed5c000f3ad123d44f98651e4574254cecad241a4f7ff60431181cee9a1c1: Status 404 returned error can't find the container with id 9e8ed5c000f3ad123d44f98651e4574254cecad241a4f7ff60431181cee9a1c1 Dec 01 15:02:45 crc kubenswrapper[4637]: I1201 15:02:45.678390 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-v7784" Dec 01 15:02:45 crc kubenswrapper[4637]: I1201 15:02:45.814253 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5348dcbd-104a-4fff-9414-bb859f58fd52","Type":"ContainerStarted","Data":"bae5b2ad52c0682b6495aec820d7786bb287c030d0259badd37b16dbefe649c3"} Dec 01 15:02:45 crc kubenswrapper[4637]: I1201 15:02:45.814288 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5348dcbd-104a-4fff-9414-bb859f58fd52","Type":"ContainerStarted","Data":"81bd19adef4b6a9310655022ac865bd284224bec9f57264e5d5e3151c7ce7034"} Dec 01 15:02:45 crc kubenswrapper[4637]: I1201 15:02:45.815568 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 01 15:02:45 crc kubenswrapper[4637]: I1201 15:02:45.834439 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4r967" event={"ID":"5a4b0da8-5193-4f28-944c-15c3d1f547d6","Type":"ContainerStarted","Data":"9e8ed5c000f3ad123d44f98651e4574254cecad241a4f7ff60431181cee9a1c1"} Dec 01 15:02:45 crc kubenswrapper[4637]: I1201 15:02:45.849464 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.946133115 podStartE2EDuration="4.849447202s" podCreationTimestamp="2025-12-01 15:02:41 +0000 UTC" firstStartedPulling="2025-12-01 15:02:42.849859157 +0000 UTC m=+1013.367567985" lastFinishedPulling="2025-12-01 15:02:44.753173244 +0000 UTC m=+1015.270882072" observedRunningTime="2025-12-01 15:02:45.845360711 +0000 UTC m=+1016.363069539" watchObservedRunningTime="2025-12-01 15:02:45.849447202 +0000 UTC m=+1016.367156030" Dec 01 15:02:46 crc kubenswrapper[4637]: I1201 15:02:46.201544 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-f2htt"] Dec 01 15:02:46 crc kubenswrapper[4637]: I1201 15:02:46.265512 4637 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/glance-db-create-v7784"] Dec 01 15:02:46 crc kubenswrapper[4637]: W1201 15:02:46.285260 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9977d860_baa6_4a79_83ac_95e324302046.slice/crio-0008789c3561074c64fa30edb02e48769113313486933771e9ef88a8bddbbf9b WatchSource:0}: Error finding container 0008789c3561074c64fa30edb02e48769113313486933771e9ef88a8bddbbf9b: Status 404 returned error can't find the container with id 0008789c3561074c64fa30edb02e48769113313486933771e9ef88a8bddbbf9b Dec 01 15:02:46 crc kubenswrapper[4637]: I1201 15:02:46.856897 4637 generic.go:334] "Generic (PLEG): container finished" podID="5a4b0da8-5193-4f28-944c-15c3d1f547d6" containerID="aead16d51cd14f842b809d9d33b95a803a1fbf2fbf9841dfdde1c27d98e1a7f0" exitCode=0 Dec 01 15:02:46 crc kubenswrapper[4637]: I1201 15:02:46.856973 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4r967" event={"ID":"5a4b0da8-5193-4f28-944c-15c3d1f547d6","Type":"ContainerDied","Data":"aead16d51cd14f842b809d9d33b95a803a1fbf2fbf9841dfdde1c27d98e1a7f0"} Dec 01 15:02:46 crc kubenswrapper[4637]: I1201 15:02:46.862056 4637 generic.go:334] "Generic (PLEG): container finished" podID="97be1ae1-cfd9-421b-a02a-ea2f8d1be388" containerID="da463577d8b340b6b7256bb9068525a0c3a4b1008bfaf946ff5934c9dff416b6" exitCode=0 Dec 01 15:02:46 crc kubenswrapper[4637]: I1201 15:02:46.862131 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-f2htt" event={"ID":"97be1ae1-cfd9-421b-a02a-ea2f8d1be388","Type":"ContainerDied","Data":"da463577d8b340b6b7256bb9068525a0c3a4b1008bfaf946ff5934c9dff416b6"} Dec 01 15:02:46 crc kubenswrapper[4637]: I1201 15:02:46.862407 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-f2htt" 
event={"ID":"97be1ae1-cfd9-421b-a02a-ea2f8d1be388","Type":"ContainerStarted","Data":"f8c9acf2a889ded6a518891152bb987f7280980c76e35d673644c1ce42ac9876"} Dec 01 15:02:46 crc kubenswrapper[4637]: I1201 15:02:46.864206 4637 generic.go:334] "Generic (PLEG): container finished" podID="9977d860-baa6-4a79-83ac-95e324302046" containerID="86ded1e2eb9eb3f45f32fe2db9695154b099b18a8c9e6697aa614a7d9aaf5f21" exitCode=0 Dec 01 15:02:46 crc kubenswrapper[4637]: I1201 15:02:46.864344 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-v7784" event={"ID":"9977d860-baa6-4a79-83ac-95e324302046","Type":"ContainerDied","Data":"86ded1e2eb9eb3f45f32fe2db9695154b099b18a8c9e6697aa614a7d9aaf5f21"} Dec 01 15:02:46 crc kubenswrapper[4637]: I1201 15:02:46.864410 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-v7784" event={"ID":"9977d860-baa6-4a79-83ac-95e324302046","Type":"ContainerStarted","Data":"0008789c3561074c64fa30edb02e48769113313486933771e9ef88a8bddbbf9b"} Dec 01 15:02:47 crc kubenswrapper[4637]: I1201 15:02:47.632037 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xsqc9" Dec 01 15:02:47 crc kubenswrapper[4637]: I1201 15:02:47.689411 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xsqc9" Dec 01 15:02:47 crc kubenswrapper[4637]: I1201 15:02:47.872375 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xsqc9"] Dec 01 15:02:47 crc kubenswrapper[4637]: I1201 15:02:47.874656 4637 generic.go:334] "Generic (PLEG): container finished" podID="911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51" containerID="7855faf0970f1d9d1829a834ea89ca9dfe5599ff99d99158d7ac0bf40ba37ebc" exitCode=0 Dec 01 15:02:47 crc kubenswrapper[4637]: I1201 15:02:47.874691 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vnnqf" 
event={"ID":"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51","Type":"ContainerDied","Data":"7855faf0970f1d9d1829a834ea89ca9dfe5599ff99d99158d7ac0bf40ba37ebc"} Dec 01 15:02:48 crc kubenswrapper[4637]: I1201 15:02:48.302676 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4r967" Dec 01 15:02:48 crc kubenswrapper[4637]: I1201 15:02:48.376086 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fbhn\" (UniqueName: \"kubernetes.io/projected/5a4b0da8-5193-4f28-944c-15c3d1f547d6-kube-api-access-7fbhn\") pod \"5a4b0da8-5193-4f28-944c-15c3d1f547d6\" (UID: \"5a4b0da8-5193-4f28-944c-15c3d1f547d6\") " Dec 01 15:02:48 crc kubenswrapper[4637]: I1201 15:02:48.382095 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a4b0da8-5193-4f28-944c-15c3d1f547d6-kube-api-access-7fbhn" (OuterVolumeSpecName: "kube-api-access-7fbhn") pod "5a4b0da8-5193-4f28-944c-15c3d1f547d6" (UID: "5a4b0da8-5193-4f28-944c-15c3d1f547d6"). InnerVolumeSpecName "kube-api-access-7fbhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:02:48 crc kubenswrapper[4637]: I1201 15:02:48.436083 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-f2htt" Dec 01 15:02:48 crc kubenswrapper[4637]: I1201 15:02:48.444078 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-v7784" Dec 01 15:02:48 crc kubenswrapper[4637]: I1201 15:02:48.479018 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fbhn\" (UniqueName: \"kubernetes.io/projected/5a4b0da8-5193-4f28-944c-15c3d1f547d6-kube-api-access-7fbhn\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:48 crc kubenswrapper[4637]: I1201 15:02:48.580144 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfwv7\" (UniqueName: \"kubernetes.io/projected/9977d860-baa6-4a79-83ac-95e324302046-kube-api-access-qfwv7\") pod \"9977d860-baa6-4a79-83ac-95e324302046\" (UID: \"9977d860-baa6-4a79-83ac-95e324302046\") " Dec 01 15:02:48 crc kubenswrapper[4637]: I1201 15:02:48.580240 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sm56\" (UniqueName: \"kubernetes.io/projected/97be1ae1-cfd9-421b-a02a-ea2f8d1be388-kube-api-access-8sm56\") pod \"97be1ae1-cfd9-421b-a02a-ea2f8d1be388\" (UID: \"97be1ae1-cfd9-421b-a02a-ea2f8d1be388\") " Dec 01 15:02:48 crc kubenswrapper[4637]: I1201 15:02:48.583179 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97be1ae1-cfd9-421b-a02a-ea2f8d1be388-kube-api-access-8sm56" (OuterVolumeSpecName: "kube-api-access-8sm56") pod "97be1ae1-cfd9-421b-a02a-ea2f8d1be388" (UID: "97be1ae1-cfd9-421b-a02a-ea2f8d1be388"). InnerVolumeSpecName "kube-api-access-8sm56". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:02:48 crc kubenswrapper[4637]: I1201 15:02:48.583554 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9977d860-baa6-4a79-83ac-95e324302046-kube-api-access-qfwv7" (OuterVolumeSpecName: "kube-api-access-qfwv7") pod "9977d860-baa6-4a79-83ac-95e324302046" (UID: "9977d860-baa6-4a79-83ac-95e324302046"). InnerVolumeSpecName "kube-api-access-qfwv7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:02:48 crc kubenswrapper[4637]: I1201 15:02:48.681481 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sm56\" (UniqueName: \"kubernetes.io/projected/97be1ae1-cfd9-421b-a02a-ea2f8d1be388-kube-api-access-8sm56\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:48 crc kubenswrapper[4637]: I1201 15:02:48.681516 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfwv7\" (UniqueName: \"kubernetes.io/projected/9977d860-baa6-4a79-83ac-95e324302046-kube-api-access-qfwv7\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:48 crc kubenswrapper[4637]: I1201 15:02:48.884434 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-f2htt" Dec 01 15:02:48 crc kubenswrapper[4637]: I1201 15:02:48.884431 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-f2htt" event={"ID":"97be1ae1-cfd9-421b-a02a-ea2f8d1be388","Type":"ContainerDied","Data":"f8c9acf2a889ded6a518891152bb987f7280980c76e35d673644c1ce42ac9876"} Dec 01 15:02:48 crc kubenswrapper[4637]: I1201 15:02:48.884648 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8c9acf2a889ded6a518891152bb987f7280980c76e35d673644c1ce42ac9876" Dec 01 15:02:48 crc kubenswrapper[4637]: I1201 15:02:48.886535 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-v7784" event={"ID":"9977d860-baa6-4a79-83ac-95e324302046","Type":"ContainerDied","Data":"0008789c3561074c64fa30edb02e48769113313486933771e9ef88a8bddbbf9b"} Dec 01 15:02:48 crc kubenswrapper[4637]: I1201 15:02:48.886566 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0008789c3561074c64fa30edb02e48769113313486933771e9ef88a8bddbbf9b" Dec 01 15:02:48 crc kubenswrapper[4637]: I1201 15:02:48.886609 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-v7784" Dec 01 15:02:48 crc kubenswrapper[4637]: I1201 15:02:48.887841 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xsqc9" podUID="30f33978-c393-4faf-99c0-b6ce509f1d3f" containerName="registry-server" containerID="cri-o://2d3a35b6a9b1c07c864ab74bff216e665c525e9e0192aa4b26da9920ec2b3f4b" gracePeriod=2 Dec 01 15:02:48 crc kubenswrapper[4637]: I1201 15:02:48.888207 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4r967" event={"ID":"5a4b0da8-5193-4f28-944c-15c3d1f547d6","Type":"ContainerDied","Data":"9e8ed5c000f3ad123d44f98651e4574254cecad241a4f7ff60431181cee9a1c1"} Dec 01 15:02:48 crc kubenswrapper[4637]: I1201 15:02:48.888223 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e8ed5c000f3ad123d44f98651e4574254cecad241a4f7ff60431181cee9a1c1" Dec 01 15:02:48 crc kubenswrapper[4637]: I1201 15:02:48.888301 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4r967" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.366459 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vnnqf" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.374601 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xsqc9" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.404537 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9zn4\" (UniqueName: \"kubernetes.io/projected/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-kube-api-access-m9zn4\") pod \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\" (UID: \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\") " Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.404576 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-combined-ca-bundle\") pod \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\" (UID: \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\") " Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.404658 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-swiftconf\") pod \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\" (UID: \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\") " Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.404740 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-scripts\") pod \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\" (UID: \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\") " Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.404770 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30f33978-c393-4faf-99c0-b6ce509f1d3f-catalog-content\") pod \"30f33978-c393-4faf-99c0-b6ce509f1d3f\" (UID: \"30f33978-c393-4faf-99c0-b6ce509f1d3f\") " Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.404794 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/30f33978-c393-4faf-99c0-b6ce509f1d3f-utilities\") pod \"30f33978-c393-4faf-99c0-b6ce509f1d3f\" (UID: \"30f33978-c393-4faf-99c0-b6ce509f1d3f\") " Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.404841 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-ring-data-devices\") pod \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\" (UID: \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\") " Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.404885 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-etc-swift\") pod \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\" (UID: \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\") " Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.404920 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-dispersionconf\") pod \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\" (UID: \"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51\") " Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.404953 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4xvw\" (UniqueName: \"kubernetes.io/projected/30f33978-c393-4faf-99c0-b6ce509f1d3f-kube-api-access-n4xvw\") pod \"30f33978-c393-4faf-99c0-b6ce509f1d3f\" (UID: \"30f33978-c393-4faf-99c0-b6ce509f1d3f\") " Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.406758 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30f33978-c393-4faf-99c0-b6ce509f1d3f-utilities" (OuterVolumeSpecName: "utilities") pod "30f33978-c393-4faf-99c0-b6ce509f1d3f" (UID: "30f33978-c393-4faf-99c0-b6ce509f1d3f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.407000 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51" (UID: "911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.407633 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51" (UID: "911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.422651 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-kube-api-access-m9zn4" (OuterVolumeSpecName: "kube-api-access-m9zn4") pod "911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51" (UID: "911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51"). InnerVolumeSpecName "kube-api-access-m9zn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.431807 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30f33978-c393-4faf-99c0-b6ce509f1d3f-kube-api-access-n4xvw" (OuterVolumeSpecName: "kube-api-access-n4xvw") pod "30f33978-c393-4faf-99c0-b6ce509f1d3f" (UID: "30f33978-c393-4faf-99c0-b6ce509f1d3f"). InnerVolumeSpecName "kube-api-access-n4xvw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.441123 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51" (UID: "911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.453810 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51" (UID: "911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.463446 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51" (UID: "911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.477414 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-scripts" (OuterVolumeSpecName: "scripts") pod "911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51" (UID: "911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.507387 4637 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.507426 4637 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.507443 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30f33978-c393-4faf-99c0-b6ce509f1d3f-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.507456 4637 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.507468 4637 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.507482 4637 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.507494 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4xvw\" (UniqueName: \"kubernetes.io/projected/30f33978-c393-4faf-99c0-b6ce509f1d3f-kube-api-access-n4xvw\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.507507 4637 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-m9zn4\" (UniqueName: \"kubernetes.io/projected/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-kube-api-access-m9zn4\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.507519 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.551588 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30f33978-c393-4faf-99c0-b6ce509f1d3f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30f33978-c393-4faf-99c0-b6ce509f1d3f" (UID: "30f33978-c393-4faf-99c0-b6ce509f1d3f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.609504 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30f33978-c393-4faf-99c0-b6ce509f1d3f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.898606 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xsqc9" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.898649 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsqc9" event={"ID":"30f33978-c393-4faf-99c0-b6ce509f1d3f","Type":"ContainerDied","Data":"2d3a35b6a9b1c07c864ab74bff216e665c525e9e0192aa4b26da9920ec2b3f4b"} Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.898641 4637 generic.go:334] "Generic (PLEG): container finished" podID="30f33978-c393-4faf-99c0-b6ce509f1d3f" containerID="2d3a35b6a9b1c07c864ab74bff216e665c525e9e0192aa4b26da9920ec2b3f4b" exitCode=0 Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.898685 4637 scope.go:117] "RemoveContainer" containerID="2d3a35b6a9b1c07c864ab74bff216e665c525e9e0192aa4b26da9920ec2b3f4b" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.898740 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsqc9" event={"ID":"30f33978-c393-4faf-99c0-b6ce509f1d3f","Type":"ContainerDied","Data":"2ef67e4525c9b73e6616612d6ccc7159ce8d82799ebfa8ba7b4ec6d608d35b10"} Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.903048 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vnnqf" event={"ID":"911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51","Type":"ContainerDied","Data":"5487e7b031e7891d55b24f1fa5b21220a48d480312d81398feecec944592e0fb"} Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.903093 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5487e7b031e7891d55b24f1fa5b21220a48d480312d81398feecec944592e0fb" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.903160 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-vnnqf" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.927191 4637 scope.go:117] "RemoveContainer" containerID="6436bc6f8d5adb69404ed817c8bf989aab621daa0fdc9477fe6e1776979fc5a5" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.935603 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xsqc9"] Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.949151 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xsqc9"] Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.954561 4637 scope.go:117] "RemoveContainer" containerID="c8f2adafb7bf88fdd327dba68d6543c08ee19079dbab3ec513577a60eb33670e" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.983956 4637 scope.go:117] "RemoveContainer" containerID="2d3a35b6a9b1c07c864ab74bff216e665c525e9e0192aa4b26da9920ec2b3f4b" Dec 01 15:02:49 crc kubenswrapper[4637]: E1201 15:02:49.984594 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d3a35b6a9b1c07c864ab74bff216e665c525e9e0192aa4b26da9920ec2b3f4b\": container with ID starting with 2d3a35b6a9b1c07c864ab74bff216e665c525e9e0192aa4b26da9920ec2b3f4b not found: ID does not exist" containerID="2d3a35b6a9b1c07c864ab74bff216e665c525e9e0192aa4b26da9920ec2b3f4b" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.984650 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d3a35b6a9b1c07c864ab74bff216e665c525e9e0192aa4b26da9920ec2b3f4b"} err="failed to get container status \"2d3a35b6a9b1c07c864ab74bff216e665c525e9e0192aa4b26da9920ec2b3f4b\": rpc error: code = NotFound desc = could not find container \"2d3a35b6a9b1c07c864ab74bff216e665c525e9e0192aa4b26da9920ec2b3f4b\": container with ID starting with 2d3a35b6a9b1c07c864ab74bff216e665c525e9e0192aa4b26da9920ec2b3f4b not found: ID does not 
exist" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.984685 4637 scope.go:117] "RemoveContainer" containerID="6436bc6f8d5adb69404ed817c8bf989aab621daa0fdc9477fe6e1776979fc5a5" Dec 01 15:02:49 crc kubenswrapper[4637]: E1201 15:02:49.985096 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6436bc6f8d5adb69404ed817c8bf989aab621daa0fdc9477fe6e1776979fc5a5\": container with ID starting with 6436bc6f8d5adb69404ed817c8bf989aab621daa0fdc9477fe6e1776979fc5a5 not found: ID does not exist" containerID="6436bc6f8d5adb69404ed817c8bf989aab621daa0fdc9477fe6e1776979fc5a5" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.985242 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6436bc6f8d5adb69404ed817c8bf989aab621daa0fdc9477fe6e1776979fc5a5"} err="failed to get container status \"6436bc6f8d5adb69404ed817c8bf989aab621daa0fdc9477fe6e1776979fc5a5\": rpc error: code = NotFound desc = could not find container \"6436bc6f8d5adb69404ed817c8bf989aab621daa0fdc9477fe6e1776979fc5a5\": container with ID starting with 6436bc6f8d5adb69404ed817c8bf989aab621daa0fdc9477fe6e1776979fc5a5 not found: ID does not exist" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.985349 4637 scope.go:117] "RemoveContainer" containerID="c8f2adafb7bf88fdd327dba68d6543c08ee19079dbab3ec513577a60eb33670e" Dec 01 15:02:49 crc kubenswrapper[4637]: E1201 15:02:49.985777 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8f2adafb7bf88fdd327dba68d6543c08ee19079dbab3ec513577a60eb33670e\": container with ID starting with c8f2adafb7bf88fdd327dba68d6543c08ee19079dbab3ec513577a60eb33670e not found: ID does not exist" containerID="c8f2adafb7bf88fdd327dba68d6543c08ee19079dbab3ec513577a60eb33670e" Dec 01 15:02:49 crc kubenswrapper[4637]: I1201 15:02:49.985907 4637 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8f2adafb7bf88fdd327dba68d6543c08ee19079dbab3ec513577a60eb33670e"} err="failed to get container status \"c8f2adafb7bf88fdd327dba68d6543c08ee19079dbab3ec513577a60eb33670e\": rpc error: code = NotFound desc = could not find container \"c8f2adafb7bf88fdd327dba68d6543c08ee19079dbab3ec513577a60eb33670e\": container with ID starting with c8f2adafb7bf88fdd327dba68d6543c08ee19079dbab3ec513577a60eb33670e not found: ID does not exist" Dec 01 15:02:51 crc kubenswrapper[4637]: I1201 15:02:51.783859 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30f33978-c393-4faf-99c0-b6ce509f1d3f" path="/var/lib/kubelet/pods/30f33978-c393-4faf-99c0-b6ce509f1d3f/volumes" Dec 01 15:02:51 crc kubenswrapper[4637]: I1201 15:02:51.859293 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9pxbh" Dec 01 15:02:51 crc kubenswrapper[4637]: I1201 15:02:51.860534 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9pxbh" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.090895 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-dhqng-config-zw2jb"] Dec 01 15:02:52 crc kubenswrapper[4637]: E1201 15:02:52.091865 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30f33978-c393-4faf-99c0-b6ce509f1d3f" containerName="registry-server" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.092118 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="30f33978-c393-4faf-99c0-b6ce509f1d3f" containerName="registry-server" Dec 01 15:02:52 crc kubenswrapper[4637]: E1201 15:02:52.092209 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9977d860-baa6-4a79-83ac-95e324302046" containerName="mariadb-database-create" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.092294 4637 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9977d860-baa6-4a79-83ac-95e324302046" containerName="mariadb-database-create" Dec 01 15:02:52 crc kubenswrapper[4637]: E1201 15:02:52.092384 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30f33978-c393-4faf-99c0-b6ce509f1d3f" containerName="extract-utilities" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.092461 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="30f33978-c393-4faf-99c0-b6ce509f1d3f" containerName="extract-utilities" Dec 01 15:02:52 crc kubenswrapper[4637]: E1201 15:02:52.092544 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a4b0da8-5193-4f28-944c-15c3d1f547d6" containerName="mariadb-database-create" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.092616 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a4b0da8-5193-4f28-944c-15c3d1f547d6" containerName="mariadb-database-create" Dec 01 15:02:52 crc kubenswrapper[4637]: E1201 15:02:52.092698 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51" containerName="swift-ring-rebalance" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.092752 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51" containerName="swift-ring-rebalance" Dec 01 15:02:52 crc kubenswrapper[4637]: E1201 15:02:52.092887 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30f33978-c393-4faf-99c0-b6ce509f1d3f" containerName="extract-content" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.093011 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="30f33978-c393-4faf-99c0-b6ce509f1d3f" containerName="extract-content" Dec 01 15:02:52 crc kubenswrapper[4637]: E1201 15:02:52.093107 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97be1ae1-cfd9-421b-a02a-ea2f8d1be388" containerName="mariadb-database-create" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.093193 4637 
state_mem.go:107] "Deleted CPUSet assignment" podUID="97be1ae1-cfd9-421b-a02a-ea2f8d1be388" containerName="mariadb-database-create" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.093631 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a4b0da8-5193-4f28-944c-15c3d1f547d6" containerName="mariadb-database-create" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.093728 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="97be1ae1-cfd9-421b-a02a-ea2f8d1be388" containerName="mariadb-database-create" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.093822 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="30f33978-c393-4faf-99c0-b6ce509f1d3f" containerName="registry-server" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.094967 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="9977d860-baa6-4a79-83ac-95e324302046" containerName="mariadb-database-create" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.097039 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51" containerName="swift-ring-rebalance" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.098202 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-dhqng-config-zw2jb" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.105219 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.111370 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dhqng-config-zw2jb"] Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.152164 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d3c798a8-7d64-4298-a2a5-708cc5415258-var-run-ovn\") pod \"ovn-controller-dhqng-config-zw2jb\" (UID: \"d3c798a8-7d64-4298-a2a5-708cc5415258\") " pod="openstack/ovn-controller-dhqng-config-zw2jb" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.152277 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-899dh\" (UniqueName: \"kubernetes.io/projected/d3c798a8-7d64-4298-a2a5-708cc5415258-kube-api-access-899dh\") pod \"ovn-controller-dhqng-config-zw2jb\" (UID: \"d3c798a8-7d64-4298-a2a5-708cc5415258\") " pod="openstack/ovn-controller-dhqng-config-zw2jb" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.152344 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d3c798a8-7d64-4298-a2a5-708cc5415258-var-run\") pod \"ovn-controller-dhqng-config-zw2jb\" (UID: \"d3c798a8-7d64-4298-a2a5-708cc5415258\") " pod="openstack/ovn-controller-dhqng-config-zw2jb" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.152413 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d3c798a8-7d64-4298-a2a5-708cc5415258-var-log-ovn\") pod \"ovn-controller-dhqng-config-zw2jb\" (UID: 
\"d3c798a8-7d64-4298-a2a5-708cc5415258\") " pod="openstack/ovn-controller-dhqng-config-zw2jb" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.152447 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d3c798a8-7d64-4298-a2a5-708cc5415258-additional-scripts\") pod \"ovn-controller-dhqng-config-zw2jb\" (UID: \"d3c798a8-7d64-4298-a2a5-708cc5415258\") " pod="openstack/ovn-controller-dhqng-config-zw2jb" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.152469 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d3c798a8-7d64-4298-a2a5-708cc5415258-scripts\") pod \"ovn-controller-dhqng-config-zw2jb\" (UID: \"d3c798a8-7d64-4298-a2a5-708cc5415258\") " pod="openstack/ovn-controller-dhqng-config-zw2jb" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.253690 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d3c798a8-7d64-4298-a2a5-708cc5415258-var-run\") pod \"ovn-controller-dhqng-config-zw2jb\" (UID: \"d3c798a8-7d64-4298-a2a5-708cc5415258\") " pod="openstack/ovn-controller-dhqng-config-zw2jb" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.253778 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d3c798a8-7d64-4298-a2a5-708cc5415258-var-log-ovn\") pod \"ovn-controller-dhqng-config-zw2jb\" (UID: \"d3c798a8-7d64-4298-a2a5-708cc5415258\") " pod="openstack/ovn-controller-dhqng-config-zw2jb" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.254027 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d3c798a8-7d64-4298-a2a5-708cc5415258-scripts\") pod \"ovn-controller-dhqng-config-zw2jb\" (UID: 
\"d3c798a8-7d64-4298-a2a5-708cc5415258\") " pod="openstack/ovn-controller-dhqng-config-zw2jb" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.254048 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d3c798a8-7d64-4298-a2a5-708cc5415258-additional-scripts\") pod \"ovn-controller-dhqng-config-zw2jb\" (UID: \"d3c798a8-7d64-4298-a2a5-708cc5415258\") " pod="openstack/ovn-controller-dhqng-config-zw2jb" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.254099 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d3c798a8-7d64-4298-a2a5-708cc5415258-var-run-ovn\") pod \"ovn-controller-dhqng-config-zw2jb\" (UID: \"d3c798a8-7d64-4298-a2a5-708cc5415258\") " pod="openstack/ovn-controller-dhqng-config-zw2jb" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.254158 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-899dh\" (UniqueName: \"kubernetes.io/projected/d3c798a8-7d64-4298-a2a5-708cc5415258-kube-api-access-899dh\") pod \"ovn-controller-dhqng-config-zw2jb\" (UID: \"d3c798a8-7d64-4298-a2a5-708cc5415258\") " pod="openstack/ovn-controller-dhqng-config-zw2jb" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.254296 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d3c798a8-7d64-4298-a2a5-708cc5415258-var-run\") pod \"ovn-controller-dhqng-config-zw2jb\" (UID: \"d3c798a8-7d64-4298-a2a5-708cc5415258\") " pod="openstack/ovn-controller-dhqng-config-zw2jb" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.254296 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d3c798a8-7d64-4298-a2a5-708cc5415258-var-log-ovn\") pod \"ovn-controller-dhqng-config-zw2jb\" (UID: 
\"d3c798a8-7d64-4298-a2a5-708cc5415258\") " pod="openstack/ovn-controller-dhqng-config-zw2jb" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.254385 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d3c798a8-7d64-4298-a2a5-708cc5415258-var-run-ovn\") pod \"ovn-controller-dhqng-config-zw2jb\" (UID: \"d3c798a8-7d64-4298-a2a5-708cc5415258\") " pod="openstack/ovn-controller-dhqng-config-zw2jb" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.255159 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d3c798a8-7d64-4298-a2a5-708cc5415258-additional-scripts\") pod \"ovn-controller-dhqng-config-zw2jb\" (UID: \"d3c798a8-7d64-4298-a2a5-708cc5415258\") " pod="openstack/ovn-controller-dhqng-config-zw2jb" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.257222 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d3c798a8-7d64-4298-a2a5-708cc5415258-scripts\") pod \"ovn-controller-dhqng-config-zw2jb\" (UID: \"d3c798a8-7d64-4298-a2a5-708cc5415258\") " pod="openstack/ovn-controller-dhqng-config-zw2jb" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.275953 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-899dh\" (UniqueName: \"kubernetes.io/projected/d3c798a8-7d64-4298-a2a5-708cc5415258-kube-api-access-899dh\") pod \"ovn-controller-dhqng-config-zw2jb\" (UID: \"d3c798a8-7d64-4298-a2a5-708cc5415258\") " pod="openstack/ovn-controller-dhqng-config-zw2jb" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.445270 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-dhqng-config-zw2jb" Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.882233 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dhqng-config-zw2jb"] Dec 01 15:02:52 crc kubenswrapper[4637]: I1201 15:02:52.948432 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dhqng-config-zw2jb" event={"ID":"d3c798a8-7d64-4298-a2a5-708cc5415258","Type":"ContainerStarted","Data":"a17ef34a68340a36be68c7424407a8ac374241cfd2d0af6f463066468f954535"} Dec 01 15:02:53 crc kubenswrapper[4637]: I1201 15:02:53.964274 4637 generic.go:334] "Generic (PLEG): container finished" podID="d3c798a8-7d64-4298-a2a5-708cc5415258" containerID="20107288aa1c7ee4b1420f18544a41c6bcd7cedfbbe2c33eb0599adb88be0591" exitCode=0 Dec 01 15:02:53 crc kubenswrapper[4637]: I1201 15:02:53.964401 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dhqng-config-zw2jb" event={"ID":"d3c798a8-7d64-4298-a2a5-708cc5415258","Type":"ContainerDied","Data":"20107288aa1c7ee4b1420f18544a41c6bcd7cedfbbe2c33eb0599adb88be0591"} Dec 01 15:02:54 crc kubenswrapper[4637]: I1201 15:02:54.776100 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-8b91-account-create-7ngwt"] Dec 01 15:02:54 crc kubenswrapper[4637]: I1201 15:02:54.777089 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8b91-account-create-7ngwt" Dec 01 15:02:54 crc kubenswrapper[4637]: I1201 15:02:54.780288 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 01 15:02:54 crc kubenswrapper[4637]: I1201 15:02:54.831023 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8b91-account-create-7ngwt"] Dec 01 15:02:54 crc kubenswrapper[4637]: I1201 15:02:54.925038 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg8sn\" (UniqueName: \"kubernetes.io/projected/510d80d9-6c2b-4128-bbb7-03c03e1b68dc-kube-api-access-mg8sn\") pod \"keystone-8b91-account-create-7ngwt\" (UID: \"510d80d9-6c2b-4128-bbb7-03c03e1b68dc\") " pod="openstack/keystone-8b91-account-create-7ngwt" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.027749 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg8sn\" (UniqueName: \"kubernetes.io/projected/510d80d9-6c2b-4128-bbb7-03c03e1b68dc-kube-api-access-mg8sn\") pod \"keystone-8b91-account-create-7ngwt\" (UID: \"510d80d9-6c2b-4128-bbb7-03c03e1b68dc\") " pod="openstack/keystone-8b91-account-create-7ngwt" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.055686 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg8sn\" (UniqueName: \"kubernetes.io/projected/510d80d9-6c2b-4128-bbb7-03c03e1b68dc-kube-api-access-mg8sn\") pod \"keystone-8b91-account-create-7ngwt\" (UID: \"510d80d9-6c2b-4128-bbb7-03c03e1b68dc\") " pod="openstack/keystone-8b91-account-create-7ngwt" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.140046 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8b91-account-create-7ngwt" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.224827 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-0d53-account-create-fphvj"] Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.225876 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0d53-account-create-fphvj" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.229134 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.231129 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b6vr\" (UniqueName: \"kubernetes.io/projected/d60b5eb9-075c-48f5-87b1-9bb3b870f589-kube-api-access-9b6vr\") pod \"placement-0d53-account-create-fphvj\" (UID: \"d60b5eb9-075c-48f5-87b1-9bb3b870f589\") " pod="openstack/placement-0d53-account-create-fphvj" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.243567 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-0d53-account-create-fphvj"] Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.325037 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-dhqng-config-zw2jb" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.333383 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b6vr\" (UniqueName: \"kubernetes.io/projected/d60b5eb9-075c-48f5-87b1-9bb3b870f589-kube-api-access-9b6vr\") pod \"placement-0d53-account-create-fphvj\" (UID: \"d60b5eb9-075c-48f5-87b1-9bb3b870f589\") " pod="openstack/placement-0d53-account-create-fphvj" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.360569 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b6vr\" (UniqueName: \"kubernetes.io/projected/d60b5eb9-075c-48f5-87b1-9bb3b870f589-kube-api-access-9b6vr\") pod \"placement-0d53-account-create-fphvj\" (UID: \"d60b5eb9-075c-48f5-87b1-9bb3b870f589\") " pod="openstack/placement-0d53-account-create-fphvj" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.434792 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d3c798a8-7d64-4298-a2a5-708cc5415258-additional-scripts\") pod \"d3c798a8-7d64-4298-a2a5-708cc5415258\" (UID: \"d3c798a8-7d64-4298-a2a5-708cc5415258\") " Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.435208 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d3c798a8-7d64-4298-a2a5-708cc5415258-var-run\") pod \"d3c798a8-7d64-4298-a2a5-708cc5415258\" (UID: \"d3c798a8-7d64-4298-a2a5-708cc5415258\") " Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.435307 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d3c798a8-7d64-4298-a2a5-708cc5415258-scripts\") pod \"d3c798a8-7d64-4298-a2a5-708cc5415258\" (UID: \"d3c798a8-7d64-4298-a2a5-708cc5415258\") " Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 
15:02:55.435345 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-899dh\" (UniqueName: \"kubernetes.io/projected/d3c798a8-7d64-4298-a2a5-708cc5415258-kube-api-access-899dh\") pod \"d3c798a8-7d64-4298-a2a5-708cc5415258\" (UID: \"d3c798a8-7d64-4298-a2a5-708cc5415258\") " Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.435358 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3c798a8-7d64-4298-a2a5-708cc5415258-var-run" (OuterVolumeSpecName: "var-run") pod "d3c798a8-7d64-4298-a2a5-708cc5415258" (UID: "d3c798a8-7d64-4298-a2a5-708cc5415258"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.435382 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d3c798a8-7d64-4298-a2a5-708cc5415258-var-run-ovn\") pod \"d3c798a8-7d64-4298-a2a5-708cc5415258\" (UID: \"d3c798a8-7d64-4298-a2a5-708cc5415258\") " Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.435426 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3c798a8-7d64-4298-a2a5-708cc5415258-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "d3c798a8-7d64-4298-a2a5-708cc5415258" (UID: "d3c798a8-7d64-4298-a2a5-708cc5415258"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.435498 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d3c798a8-7d64-4298-a2a5-708cc5415258-var-log-ovn\") pod \"d3c798a8-7d64-4298-a2a5-708cc5415258\" (UID: \"d3c798a8-7d64-4298-a2a5-708cc5415258\") " Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.435540 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3c798a8-7d64-4298-a2a5-708cc5415258-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "d3c798a8-7d64-4298-a2a5-708cc5415258" (UID: "d3c798a8-7d64-4298-a2a5-708cc5415258"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.436015 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3c798a8-7d64-4298-a2a5-708cc5415258-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "d3c798a8-7d64-4298-a2a5-708cc5415258" (UID: "d3c798a8-7d64-4298-a2a5-708cc5415258"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.436287 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3c798a8-7d64-4298-a2a5-708cc5415258-scripts" (OuterVolumeSpecName: "scripts") pod "d3c798a8-7d64-4298-a2a5-708cc5415258" (UID: "d3c798a8-7d64-4298-a2a5-708cc5415258"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.436300 4637 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d3c798a8-7d64-4298-a2a5-708cc5415258-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.436315 4637 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d3c798a8-7d64-4298-a2a5-708cc5415258-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.436329 4637 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d3c798a8-7d64-4298-a2a5-708cc5415258-var-run\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.436338 4637 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d3c798a8-7d64-4298-a2a5-708cc5415258-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.438143 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3c798a8-7d64-4298-a2a5-708cc5415258-kube-api-access-899dh" (OuterVolumeSpecName: "kube-api-access-899dh") pod "d3c798a8-7d64-4298-a2a5-708cc5415258" (UID: "d3c798a8-7d64-4298-a2a5-708cc5415258"). InnerVolumeSpecName "kube-api-access-899dh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.537837 4637 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d3c798a8-7d64-4298-a2a5-708cc5415258-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.537871 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-899dh\" (UniqueName: \"kubernetes.io/projected/d3c798a8-7d64-4298-a2a5-708cc5415258-kube-api-access-899dh\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.605563 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-569f-account-create-5sxsf"] Dec 01 15:02:55 crc kubenswrapper[4637]: E1201 15:02:55.605946 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3c798a8-7d64-4298-a2a5-708cc5415258" containerName="ovn-config" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.605960 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3c798a8-7d64-4298-a2a5-708cc5415258" containerName="ovn-config" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.606180 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3c798a8-7d64-4298-a2a5-708cc5415258" containerName="ovn-config" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.606754 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-569f-account-create-5sxsf" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.609150 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.617536 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-569f-account-create-5sxsf"] Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.622349 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-0d53-account-create-fphvj" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.677279 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8b91-account-create-7ngwt"] Dec 01 15:02:55 crc kubenswrapper[4637]: W1201 15:02:55.683387 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod510d80d9_6c2b_4128_bbb7_03c03e1b68dc.slice/crio-ed039d990a955c95f1d1f79662f7543f2afd3aeafa77db5b08cf89e20a3c5539 WatchSource:0}: Error finding container ed039d990a955c95f1d1f79662f7543f2afd3aeafa77db5b08cf89e20a3c5539: Status 404 returned error can't find the container with id ed039d990a955c95f1d1f79662f7543f2afd3aeafa77db5b08cf89e20a3c5539 Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.739975 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h86h7\" (UniqueName: \"kubernetes.io/projected/7381386b-37bd-4588-ae5e-40f50e89e0e2-kube-api-access-h86h7\") pod \"glance-569f-account-create-5sxsf\" (UID: \"7381386b-37bd-4588-ae5e-40f50e89e0e2\") " pod="openstack/glance-569f-account-create-5sxsf" Dec 01 15:02:55 crc kubenswrapper[4637]: E1201 15:02:55.818399 4637 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30f33978_c393_4faf_99c0_b6ce509f1d3f.slice/crio-2ef67e4525c9b73e6616612d6ccc7159ce8d82799ebfa8ba7b4ec6d608d35b10\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30f33978_c393_4faf_99c0_b6ce509f1d3f.slice\": RecentStats: unable to find data in memory cache]" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.843429 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h86h7\" (UniqueName: 
\"kubernetes.io/projected/7381386b-37bd-4588-ae5e-40f50e89e0e2-kube-api-access-h86h7\") pod \"glance-569f-account-create-5sxsf\" (UID: \"7381386b-37bd-4588-ae5e-40f50e89e0e2\") " pod="openstack/glance-569f-account-create-5sxsf" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.872767 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h86h7\" (UniqueName: \"kubernetes.io/projected/7381386b-37bd-4588-ae5e-40f50e89e0e2-kube-api-access-h86h7\") pod \"glance-569f-account-create-5sxsf\" (UID: \"7381386b-37bd-4588-ae5e-40f50e89e0e2\") " pod="openstack/glance-569f-account-create-5sxsf" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.926566 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-569f-account-create-5sxsf" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.980366 4637 generic.go:334] "Generic (PLEG): container finished" podID="510d80d9-6c2b-4128-bbb7-03c03e1b68dc" containerID="5838f6c0f4f9dd58ecad23da69b5b3369fd34bd961a1d5ddf33324acfe19ba5f" exitCode=0 Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.980443 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8b91-account-create-7ngwt" event={"ID":"510d80d9-6c2b-4128-bbb7-03c03e1b68dc","Type":"ContainerDied","Data":"5838f6c0f4f9dd58ecad23da69b5b3369fd34bd961a1d5ddf33324acfe19ba5f"} Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.980470 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8b91-account-create-7ngwt" event={"ID":"510d80d9-6c2b-4128-bbb7-03c03e1b68dc","Type":"ContainerStarted","Data":"ed039d990a955c95f1d1f79662f7543f2afd3aeafa77db5b08cf89e20a3c5539"} Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.982197 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dhqng-config-zw2jb" 
event={"ID":"d3c798a8-7d64-4298-a2a5-708cc5415258","Type":"ContainerDied","Data":"a17ef34a68340a36be68c7424407a8ac374241cfd2d0af6f463066468f954535"} Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.982362 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a17ef34a68340a36be68c7424407a8ac374241cfd2d0af6f463066468f954535" Dec 01 15:02:55 crc kubenswrapper[4637]: I1201 15:02:55.982420 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dhqng-config-zw2jb" Dec 01 15:02:56 crc kubenswrapper[4637]: I1201 15:02:56.124210 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-0d53-account-create-fphvj"] Dec 01 15:02:56 crc kubenswrapper[4637]: W1201 15:02:56.146937 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd60b5eb9_075c_48f5_87b1_9bb3b870f589.slice/crio-4475fd53b162ab55f887fa45a46dc7def07074ebd865da8e60ff0aeefe40e903 WatchSource:0}: Error finding container 4475fd53b162ab55f887fa45a46dc7def07074ebd865da8e60ff0aeefe40e903: Status 404 returned error can't find the container with id 4475fd53b162ab55f887fa45a46dc7def07074ebd865da8e60ff0aeefe40e903 Dec 01 15:02:56 crc kubenswrapper[4637]: I1201 15:02:56.392989 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-569f-account-create-5sxsf"] Dec 01 15:02:56 crc kubenswrapper[4637]: I1201 15:02:56.469722 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-dhqng-config-zw2jb"] Dec 01 15:02:56 crc kubenswrapper[4637]: I1201 15:02:56.482915 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-dhqng-config-zw2jb"] Dec 01 15:02:57 crc kubenswrapper[4637]: I1201 15:02:57.016964 4637 generic.go:334] "Generic (PLEG): container finished" podID="7381386b-37bd-4588-ae5e-40f50e89e0e2" 
containerID="0ffa74d75fd30ab018d0b5d259c0c28bc2407c91fa05830953e4c28f9454e882" exitCode=0 Dec 01 15:02:57 crc kubenswrapper[4637]: I1201 15:02:57.017563 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-569f-account-create-5sxsf" event={"ID":"7381386b-37bd-4588-ae5e-40f50e89e0e2","Type":"ContainerDied","Data":"0ffa74d75fd30ab018d0b5d259c0c28bc2407c91fa05830953e4c28f9454e882"} Dec 01 15:02:57 crc kubenswrapper[4637]: I1201 15:02:57.017598 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-569f-account-create-5sxsf" event={"ID":"7381386b-37bd-4588-ae5e-40f50e89e0e2","Type":"ContainerStarted","Data":"b58472914cccd2d0f8c6d86274fb97a677e08f5c90dcf97f00f39621ac22fe1f"} Dec 01 15:02:57 crc kubenswrapper[4637]: I1201 15:02:57.020881 4637 generic.go:334] "Generic (PLEG): container finished" podID="d60b5eb9-075c-48f5-87b1-9bb3b870f589" containerID="2241e385dd41df47f36238d2c5f7b9c7afecd216eb0f627979fbbcd052ba5fc4" exitCode=0 Dec 01 15:02:57 crc kubenswrapper[4637]: I1201 15:02:57.020983 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0d53-account-create-fphvj" event={"ID":"d60b5eb9-075c-48f5-87b1-9bb3b870f589","Type":"ContainerDied","Data":"2241e385dd41df47f36238d2c5f7b9c7afecd216eb0f627979fbbcd052ba5fc4"} Dec 01 15:02:57 crc kubenswrapper[4637]: I1201 15:02:57.021008 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0d53-account-create-fphvj" event={"ID":"d60b5eb9-075c-48f5-87b1-9bb3b870f589","Type":"ContainerStarted","Data":"4475fd53b162ab55f887fa45a46dc7def07074ebd865da8e60ff0aeefe40e903"} Dec 01 15:02:57 crc kubenswrapper[4637]: I1201 15:02:57.365702 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 01 15:02:57 crc kubenswrapper[4637]: I1201 15:02:57.414072 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8b91-account-create-7ngwt" Dec 01 15:02:57 crc kubenswrapper[4637]: I1201 15:02:57.578637 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg8sn\" (UniqueName: \"kubernetes.io/projected/510d80d9-6c2b-4128-bbb7-03c03e1b68dc-kube-api-access-mg8sn\") pod \"510d80d9-6c2b-4128-bbb7-03c03e1b68dc\" (UID: \"510d80d9-6c2b-4128-bbb7-03c03e1b68dc\") " Dec 01 15:02:57 crc kubenswrapper[4637]: I1201 15:02:57.586870 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/510d80d9-6c2b-4128-bbb7-03c03e1b68dc-kube-api-access-mg8sn" (OuterVolumeSpecName: "kube-api-access-mg8sn") pod "510d80d9-6c2b-4128-bbb7-03c03e1b68dc" (UID: "510d80d9-6c2b-4128-bbb7-03c03e1b68dc"). InnerVolumeSpecName "kube-api-access-mg8sn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:02:57 crc kubenswrapper[4637]: I1201 15:02:57.682030 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg8sn\" (UniqueName: \"kubernetes.io/projected/510d80d9-6c2b-4128-bbb7-03c03e1b68dc-kube-api-access-mg8sn\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:57 crc kubenswrapper[4637]: I1201 15:02:57.783136 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3c798a8-7d64-4298-a2a5-708cc5415258" path="/var/lib/kubelet/pods/d3c798a8-7d64-4298-a2a5-708cc5415258/volumes" Dec 01 15:02:58 crc kubenswrapper[4637]: I1201 15:02:58.032778 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8b91-account-create-7ngwt" Dec 01 15:02:58 crc kubenswrapper[4637]: I1201 15:02:58.032791 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8b91-account-create-7ngwt" event={"ID":"510d80d9-6c2b-4128-bbb7-03c03e1b68dc","Type":"ContainerDied","Data":"ed039d990a955c95f1d1f79662f7543f2afd3aeafa77db5b08cf89e20a3c5539"} Dec 01 15:02:58 crc kubenswrapper[4637]: I1201 15:02:58.033368 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed039d990a955c95f1d1f79662f7543f2afd3aeafa77db5b08cf89e20a3c5539" Dec 01 15:02:58 crc kubenswrapper[4637]: I1201 15:02:58.034458 4637 generic.go:334] "Generic (PLEG): container finished" podID="8eeaa55a-2c35-480c-baec-134ef1158e66" containerID="8d67755d90b536e7b72345ed9bed90290fe5733e7d1987be7c0301d882841ddf" exitCode=0 Dec 01 15:02:58 crc kubenswrapper[4637]: I1201 15:02:58.034699 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8eeaa55a-2c35-480c-baec-134ef1158e66","Type":"ContainerDied","Data":"8d67755d90b536e7b72345ed9bed90290fe5733e7d1987be7c0301d882841ddf"} Dec 01 15:02:58 crc kubenswrapper[4637]: I1201 15:02:58.518579 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0d53-account-create-fphvj" Dec 01 15:02:58 crc kubenswrapper[4637]: I1201 15:02:58.524311 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-569f-account-create-5sxsf" Dec 01 15:02:58 crc kubenswrapper[4637]: I1201 15:02:58.705462 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h86h7\" (UniqueName: \"kubernetes.io/projected/7381386b-37bd-4588-ae5e-40f50e89e0e2-kube-api-access-h86h7\") pod \"7381386b-37bd-4588-ae5e-40f50e89e0e2\" (UID: \"7381386b-37bd-4588-ae5e-40f50e89e0e2\") " Dec 01 15:02:58 crc kubenswrapper[4637]: I1201 15:02:58.705503 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b6vr\" (UniqueName: \"kubernetes.io/projected/d60b5eb9-075c-48f5-87b1-9bb3b870f589-kube-api-access-9b6vr\") pod \"d60b5eb9-075c-48f5-87b1-9bb3b870f589\" (UID: \"d60b5eb9-075c-48f5-87b1-9bb3b870f589\") " Dec 01 15:02:58 crc kubenswrapper[4637]: I1201 15:02:58.723294 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7381386b-37bd-4588-ae5e-40f50e89e0e2-kube-api-access-h86h7" (OuterVolumeSpecName: "kube-api-access-h86h7") pod "7381386b-37bd-4588-ae5e-40f50e89e0e2" (UID: "7381386b-37bd-4588-ae5e-40f50e89e0e2"). InnerVolumeSpecName "kube-api-access-h86h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:02:58 crc kubenswrapper[4637]: I1201 15:02:58.723347 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d60b5eb9-075c-48f5-87b1-9bb3b870f589-kube-api-access-9b6vr" (OuterVolumeSpecName: "kube-api-access-9b6vr") pod "d60b5eb9-075c-48f5-87b1-9bb3b870f589" (UID: "d60b5eb9-075c-48f5-87b1-9bb3b870f589"). InnerVolumeSpecName "kube-api-access-9b6vr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:02:58 crc kubenswrapper[4637]: I1201 15:02:58.807823 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h86h7\" (UniqueName: \"kubernetes.io/projected/7381386b-37bd-4588-ae5e-40f50e89e0e2-kube-api-access-h86h7\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:58 crc kubenswrapper[4637]: I1201 15:02:58.807861 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b6vr\" (UniqueName: \"kubernetes.io/projected/d60b5eb9-075c-48f5-87b1-9bb3b870f589-kube-api-access-9b6vr\") on node \"crc\" DevicePath \"\"" Dec 01 15:02:59 crc kubenswrapper[4637]: I1201 15:02:59.044112 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-569f-account-create-5sxsf" Dec 01 15:02:59 crc kubenswrapper[4637]: I1201 15:02:59.044110 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-569f-account-create-5sxsf" event={"ID":"7381386b-37bd-4588-ae5e-40f50e89e0e2","Type":"ContainerDied","Data":"b58472914cccd2d0f8c6d86274fb97a677e08f5c90dcf97f00f39621ac22fe1f"} Dec 01 15:02:59 crc kubenswrapper[4637]: I1201 15:02:59.044242 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b58472914cccd2d0f8c6d86274fb97a677e08f5c90dcf97f00f39621ac22fe1f" Dec 01 15:02:59 crc kubenswrapper[4637]: I1201 15:02:59.045348 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0d53-account-create-fphvj" event={"ID":"d60b5eb9-075c-48f5-87b1-9bb3b870f589","Type":"ContainerDied","Data":"4475fd53b162ab55f887fa45a46dc7def07074ebd865da8e60ff0aeefe40e903"} Dec 01 15:02:59 crc kubenswrapper[4637]: I1201 15:02:59.045388 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4475fd53b162ab55f887fa45a46dc7def07074ebd865da8e60ff0aeefe40e903" Dec 01 15:02:59 crc kubenswrapper[4637]: I1201 15:02:59.045407 4637 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/placement-0d53-account-create-fphvj" Dec 01 15:02:59 crc kubenswrapper[4637]: I1201 15:02:59.046997 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8eeaa55a-2c35-480c-baec-134ef1158e66","Type":"ContainerStarted","Data":"ab164b2fba6dd21a8e04001212e42fc0a476ea075a162b09badea48b34d9ead1"} Dec 01 15:02:59 crc kubenswrapper[4637]: I1201 15:02:59.047208 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 01 15:02:59 crc kubenswrapper[4637]: I1201 15:02:59.072315 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.158122599 podStartE2EDuration="1m29.072293527s" podCreationTimestamp="2025-12-01 15:01:30 +0000 UTC" firstStartedPulling="2025-12-01 15:01:32.667959174 +0000 UTC m=+943.185668002" lastFinishedPulling="2025-12-01 15:02:24.582130102 +0000 UTC m=+995.099838930" observedRunningTime="2025-12-01 15:02:59.069227484 +0000 UTC m=+1029.586936322" watchObservedRunningTime="2025-12-01 15:02:59.072293527 +0000 UTC m=+1029.590002355" Dec 01 15:03:00 crc kubenswrapper[4637]: I1201 15:03:00.740454 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2fd7aa8-b5cb-4c3c-976c-210541a77440-etc-swift\") pod \"swift-storage-0\" (UID: \"b2fd7aa8-b5cb-4c3c-976c-210541a77440\") " pod="openstack/swift-storage-0" Dec 01 15:03:00 crc kubenswrapper[4637]: I1201 15:03:00.744985 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2fd7aa8-b5cb-4c3c-976c-210541a77440-etc-swift\") pod \"swift-storage-0\" (UID: \"b2fd7aa8-b5cb-4c3c-976c-210541a77440\") " pod="openstack/swift-storage-0" Dec 01 15:03:00 crc kubenswrapper[4637]: I1201 15:03:00.805341 4637 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-db-sync-bhz5d"] Dec 01 15:03:00 crc kubenswrapper[4637]: E1201 15:03:00.805672 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="510d80d9-6c2b-4128-bbb7-03c03e1b68dc" containerName="mariadb-account-create" Dec 01 15:03:00 crc kubenswrapper[4637]: I1201 15:03:00.805699 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="510d80d9-6c2b-4128-bbb7-03c03e1b68dc" containerName="mariadb-account-create" Dec 01 15:03:00 crc kubenswrapper[4637]: E1201 15:03:00.805727 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d60b5eb9-075c-48f5-87b1-9bb3b870f589" containerName="mariadb-account-create" Dec 01 15:03:00 crc kubenswrapper[4637]: I1201 15:03:00.805735 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="d60b5eb9-075c-48f5-87b1-9bb3b870f589" containerName="mariadb-account-create" Dec 01 15:03:00 crc kubenswrapper[4637]: E1201 15:03:00.805756 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7381386b-37bd-4588-ae5e-40f50e89e0e2" containerName="mariadb-account-create" Dec 01 15:03:00 crc kubenswrapper[4637]: I1201 15:03:00.805768 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="7381386b-37bd-4588-ae5e-40f50e89e0e2" containerName="mariadb-account-create" Dec 01 15:03:00 crc kubenswrapper[4637]: I1201 15:03:00.805915 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="510d80d9-6c2b-4128-bbb7-03c03e1b68dc" containerName="mariadb-account-create" Dec 01 15:03:00 crc kubenswrapper[4637]: I1201 15:03:00.805962 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="d60b5eb9-075c-48f5-87b1-9bb3b870f589" containerName="mariadb-account-create" Dec 01 15:03:00 crc kubenswrapper[4637]: I1201 15:03:00.805972 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="7381386b-37bd-4588-ae5e-40f50e89e0e2" containerName="mariadb-account-create" Dec 01 15:03:00 crc kubenswrapper[4637]: I1201 15:03:00.806528 4637 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/glance-db-sync-bhz5d" Dec 01 15:03:00 crc kubenswrapper[4637]: I1201 15:03:00.816678 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bhz5d"] Dec 01 15:03:00 crc kubenswrapper[4637]: I1201 15:03:00.858180 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-w8bvz" Dec 01 15:03:00 crc kubenswrapper[4637]: I1201 15:03:00.858499 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 01 15:03:00 crc kubenswrapper[4637]: I1201 15:03:00.919311 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 01 15:03:00 crc kubenswrapper[4637]: I1201 15:03:00.958875 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/909f05de-3353-479b-b281-41bdb6d455fd-config-data\") pod \"glance-db-sync-bhz5d\" (UID: \"909f05de-3353-479b-b281-41bdb6d455fd\") " pod="openstack/glance-db-sync-bhz5d" Dec 01 15:03:00 crc kubenswrapper[4637]: I1201 15:03:00.958924 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msqrn\" (UniqueName: \"kubernetes.io/projected/909f05de-3353-479b-b281-41bdb6d455fd-kube-api-access-msqrn\") pod \"glance-db-sync-bhz5d\" (UID: \"909f05de-3353-479b-b281-41bdb6d455fd\") " pod="openstack/glance-db-sync-bhz5d" Dec 01 15:03:00 crc kubenswrapper[4637]: I1201 15:03:00.959201 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/909f05de-3353-479b-b281-41bdb6d455fd-db-sync-config-data\") pod \"glance-db-sync-bhz5d\" (UID: \"909f05de-3353-479b-b281-41bdb6d455fd\") " pod="openstack/glance-db-sync-bhz5d" Dec 01 15:03:00 crc kubenswrapper[4637]: I1201 15:03:00.959348 4637 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/909f05de-3353-479b-b281-41bdb6d455fd-combined-ca-bundle\") pod \"glance-db-sync-bhz5d\" (UID: \"909f05de-3353-479b-b281-41bdb6d455fd\") " pod="openstack/glance-db-sync-bhz5d" Dec 01 15:03:01 crc kubenswrapper[4637]: I1201 15:03:01.060924 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msqrn\" (UniqueName: \"kubernetes.io/projected/909f05de-3353-479b-b281-41bdb6d455fd-kube-api-access-msqrn\") pod \"glance-db-sync-bhz5d\" (UID: \"909f05de-3353-479b-b281-41bdb6d455fd\") " pod="openstack/glance-db-sync-bhz5d" Dec 01 15:03:01 crc kubenswrapper[4637]: I1201 15:03:01.061626 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/909f05de-3353-479b-b281-41bdb6d455fd-db-sync-config-data\") pod \"glance-db-sync-bhz5d\" (UID: \"909f05de-3353-479b-b281-41bdb6d455fd\") " pod="openstack/glance-db-sync-bhz5d" Dec 01 15:03:01 crc kubenswrapper[4637]: I1201 15:03:01.066479 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/909f05de-3353-479b-b281-41bdb6d455fd-combined-ca-bundle\") pod \"glance-db-sync-bhz5d\" (UID: \"909f05de-3353-479b-b281-41bdb6d455fd\") " pod="openstack/glance-db-sync-bhz5d" Dec 01 15:03:01 crc kubenswrapper[4637]: I1201 15:03:01.066645 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/909f05de-3353-479b-b281-41bdb6d455fd-config-data\") pod \"glance-db-sync-bhz5d\" (UID: \"909f05de-3353-479b-b281-41bdb6d455fd\") " pod="openstack/glance-db-sync-bhz5d" Dec 01 15:03:01 crc kubenswrapper[4637]: I1201 15:03:01.073142 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" 
(UniqueName: \"kubernetes.io/secret/909f05de-3353-479b-b281-41bdb6d455fd-db-sync-config-data\") pod \"glance-db-sync-bhz5d\" (UID: \"909f05de-3353-479b-b281-41bdb6d455fd\") " pod="openstack/glance-db-sync-bhz5d" Dec 01 15:03:01 crc kubenswrapper[4637]: I1201 15:03:01.073827 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/909f05de-3353-479b-b281-41bdb6d455fd-config-data\") pod \"glance-db-sync-bhz5d\" (UID: \"909f05de-3353-479b-b281-41bdb6d455fd\") " pod="openstack/glance-db-sync-bhz5d" Dec 01 15:03:01 crc kubenswrapper[4637]: I1201 15:03:01.103424 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/909f05de-3353-479b-b281-41bdb6d455fd-combined-ca-bundle\") pod \"glance-db-sync-bhz5d\" (UID: \"909f05de-3353-479b-b281-41bdb6d455fd\") " pod="openstack/glance-db-sync-bhz5d" Dec 01 15:03:01 crc kubenswrapper[4637]: I1201 15:03:01.107100 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msqrn\" (UniqueName: \"kubernetes.io/projected/909f05de-3353-479b-b281-41bdb6d455fd-kube-api-access-msqrn\") pod \"glance-db-sync-bhz5d\" (UID: \"909f05de-3353-479b-b281-41bdb6d455fd\") " pod="openstack/glance-db-sync-bhz5d" Dec 01 15:03:01 crc kubenswrapper[4637]: I1201 15:03:01.179601 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bhz5d" Dec 01 15:03:01 crc kubenswrapper[4637]: I1201 15:03:01.600231 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 01 15:03:01 crc kubenswrapper[4637]: W1201 15:03:01.614589 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2fd7aa8_b5cb_4c3c_976c_210541a77440.slice/crio-02811d006c4155c73a62b6640a16a5e139d3ab7d53737723bf4ce000b9079e95 WatchSource:0}: Error finding container 02811d006c4155c73a62b6640a16a5e139d3ab7d53737723bf4ce000b9079e95: Status 404 returned error can't find the container with id 02811d006c4155c73a62b6640a16a5e139d3ab7d53737723bf4ce000b9079e95 Dec 01 15:03:01 crc kubenswrapper[4637]: I1201 15:03:01.840085 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-dhqng" Dec 01 15:03:02 crc kubenswrapper[4637]: I1201 15:03:02.077446 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2fd7aa8-b5cb-4c3c-976c-210541a77440","Type":"ContainerStarted","Data":"02811d006c4155c73a62b6640a16a5e139d3ab7d53737723bf4ce000b9079e95"} Dec 01 15:03:02 crc kubenswrapper[4637]: I1201 15:03:02.590268 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bhz5d"] Dec 01 15:03:03 crc kubenswrapper[4637]: I1201 15:03:03.085504 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bhz5d" event={"ID":"909f05de-3353-479b-b281-41bdb6d455fd","Type":"ContainerStarted","Data":"6a239a58597abdb3e20c2ec48a59485a32ab9271659146300bef7d4b824d7c04"} Dec 01 15:03:03 crc kubenswrapper[4637]: I1201 15:03:03.087310 4637 generic.go:334] "Generic (PLEG): container finished" podID="bee806ff-8bec-49d0-a47f-bfd8edbb36fb" containerID="af0da3d91d099753d92b445ebb956eb6402c743e3a45b0e00e766e3bd331a51f" exitCode=0 Dec 01 15:03:03 crc kubenswrapper[4637]: I1201 
15:03:03.087361 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bee806ff-8bec-49d0-a47f-bfd8edbb36fb","Type":"ContainerDied","Data":"af0da3d91d099753d92b445ebb956eb6402c743e3a45b0e00e766e3bd331a51f"} Dec 01 15:03:04 crc kubenswrapper[4637]: I1201 15:03:04.097004 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bee806ff-8bec-49d0-a47f-bfd8edbb36fb","Type":"ContainerStarted","Data":"ec423fcce8896ba5554459a5c4618d8fd2a6994e18d0be9e0b13a2d9ce8058e5"} Dec 01 15:03:04 crc kubenswrapper[4637]: I1201 15:03:04.098268 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:03:04 crc kubenswrapper[4637]: I1201 15:03:04.101020 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2fd7aa8-b5cb-4c3c-976c-210541a77440","Type":"ContainerStarted","Data":"0ebcee27ede1d9267ba71ccc55df62deb856449dbccf4d8600daf4a5038c6574"} Dec 01 15:03:04 crc kubenswrapper[4637]: I1201 15:03:04.101054 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2fd7aa8-b5cb-4c3c-976c-210541a77440","Type":"ContainerStarted","Data":"6aa376247a6ef4cf153a463878aecb0f89a85ac33c2cb2b05c06562a73e04e38"} Dec 01 15:03:04 crc kubenswrapper[4637]: I1201 15:03:04.101066 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2fd7aa8-b5cb-4c3c-976c-210541a77440","Type":"ContainerStarted","Data":"fcbf3d1ffb12247bb87c9a069203cb7fa34cbaf165dfb89feec6d4ca73a21c6c"} Dec 01 15:03:04 crc kubenswrapper[4637]: I1201 15:03:04.101076 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2fd7aa8-b5cb-4c3c-976c-210541a77440","Type":"ContainerStarted","Data":"e639ae3623022ac1e246dfeb6fc98108d938a287c66e904a1d25315fd910b92c"} Dec 01 15:03:04 crc kubenswrapper[4637]: I1201 
15:03:04.136098 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371941.7187 podStartE2EDuration="1m35.13607564s" podCreationTimestamp="2025-12-01 15:01:29 +0000 UTC" firstStartedPulling="2025-12-01 15:01:31.989318513 +0000 UTC m=+942.507027341" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:03:04.134796926 +0000 UTC m=+1034.652505754" watchObservedRunningTime="2025-12-01 15:03:04.13607564 +0000 UTC m=+1034.653784468" Dec 01 15:03:06 crc kubenswrapper[4637]: E1201 15:03:06.109637 4637 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30f33978_c393_4faf_99c0_b6ce509f1d3f.slice/crio-2ef67e4525c9b73e6616612d6ccc7159ce8d82799ebfa8ba7b4ec6d608d35b10\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30f33978_c393_4faf_99c0_b6ce509f1d3f.slice\": RecentStats: unable to find data in memory cache]" Dec 01 15:03:06 crc kubenswrapper[4637]: I1201 15:03:06.134397 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2fd7aa8-b5cb-4c3c-976c-210541a77440","Type":"ContainerStarted","Data":"9a1a46b91954eedb63db051cea0466c8b905a2e05e4c41ee6b02fe1d3b9b8fab"} Dec 01 15:03:06 crc kubenswrapper[4637]: I1201 15:03:06.134450 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2fd7aa8-b5cb-4c3c-976c-210541a77440","Type":"ContainerStarted","Data":"0ab992cf97a1a0bf36127eb424031b10ccbd678427e441983086dfaaaa07542a"} Dec 01 15:03:06 crc kubenswrapper[4637]: I1201 15:03:06.134463 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"b2fd7aa8-b5cb-4c3c-976c-210541a77440","Type":"ContainerStarted","Data":"146ceee4cdf97488aa3a76e0e036c1ad466633f2a9225ff1562b9e21f258ee60"} Dec 01 15:03:07 crc kubenswrapper[4637]: I1201 15:03:07.148550 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2fd7aa8-b5cb-4c3c-976c-210541a77440","Type":"ContainerStarted","Data":"f9c2cd5210aa19f20d1b1ece2e9676db2881ad0fc7e3039899083c2e020a2e8e"} Dec 01 15:03:08 crc kubenswrapper[4637]: I1201 15:03:08.162045 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2fd7aa8-b5cb-4c3c-976c-210541a77440","Type":"ContainerStarted","Data":"9a248ba9f87eadfc72bf423aaddb3416a9513ce7a2e436e96d67d3c9e1e0b34f"} Dec 01 15:03:09 crc kubenswrapper[4637]: I1201 15:03:09.175744 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2fd7aa8-b5cb-4c3c-976c-210541a77440","Type":"ContainerStarted","Data":"3ae601d25531a21356be72ea3e5f83ea96b0acba33ccab810a33a0e39cce4616"} Dec 01 15:03:09 crc kubenswrapper[4637]: I1201 15:03:09.176253 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2fd7aa8-b5cb-4c3c-976c-210541a77440","Type":"ContainerStarted","Data":"d7c6e718d3bd03f6f3787cfca2a631c13cc13147a716a2438a6ffc98be31f9c9"} Dec 01 15:03:10 crc kubenswrapper[4637]: I1201 15:03:10.211443 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2fd7aa8-b5cb-4c3c-976c-210541a77440","Type":"ContainerStarted","Data":"d8ec8317bfaa54df4e1e4f6499eed930a745026c4d34a98ad7bb874a672f3061"} Dec 01 15:03:10 crc kubenswrapper[4637]: I1201 15:03:10.211491 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2fd7aa8-b5cb-4c3c-976c-210541a77440","Type":"ContainerStarted","Data":"ca286711c80cfde06b78bf6d8d8977c03df70735e040532efe13cf3d74076585"} Dec 01 15:03:11 crc 
kubenswrapper[4637]: I1201 15:03:11.240486 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2fd7aa8-b5cb-4c3c-976c-210541a77440","Type":"ContainerStarted","Data":"a40aa5eab4a911c69780ce6470efe4a77b683f48b66c77f54ed350e3ddbe9206"} Dec 01 15:03:11 crc kubenswrapper[4637]: I1201 15:03:11.241003 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2fd7aa8-b5cb-4c3c-976c-210541a77440","Type":"ContainerStarted","Data":"d71e6c3df10aa2f87b8d58762f4e2e28fe2eb98b00428dcba5a54d0481e8ade7"} Dec 01 15:03:11 crc kubenswrapper[4637]: I1201 15:03:11.737379 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=38.457511369 podStartE2EDuration="44.737363536s" podCreationTimestamp="2025-12-01 15:02:27 +0000 UTC" firstStartedPulling="2025-12-01 15:03:01.618365072 +0000 UTC m=+1032.136073900" lastFinishedPulling="2025-12-01 15:03:07.898217219 +0000 UTC m=+1038.415926067" observedRunningTime="2025-12-01 15:03:11.291326239 +0000 UTC m=+1041.809035077" watchObservedRunningTime="2025-12-01 15:03:11.737363536 +0000 UTC m=+1042.255072364" Dec 01 15:03:11 crc kubenswrapper[4637]: I1201 15:03:11.743360 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-nvk98"] Dec 01 15:03:11 crc kubenswrapper[4637]: I1201 15:03:11.745688 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" Dec 01 15:03:11 crc kubenswrapper[4637]: I1201 15:03:11.760586 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 01 15:03:11 crc kubenswrapper[4637]: I1201 15:03:11.814543 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-nvk98"] Dec 01 15:03:11 crc kubenswrapper[4637]: I1201 15:03:11.885377 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9lh6\" (UniqueName: \"kubernetes.io/projected/912d084a-f1c0-4389-a7ca-59acd0238493-kube-api-access-b9lh6\") pod \"dnsmasq-dns-6d5b6d6b67-nvk98\" (UID: \"912d084a-f1c0-4389-a7ca-59acd0238493\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" Dec 01 15:03:11 crc kubenswrapper[4637]: I1201 15:03:11.885849 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-nvk98\" (UID: \"912d084a-f1c0-4389-a7ca-59acd0238493\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" Dec 01 15:03:11 crc kubenswrapper[4637]: I1201 15:03:11.885883 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-config\") pod \"dnsmasq-dns-6d5b6d6b67-nvk98\" (UID: \"912d084a-f1c0-4389-a7ca-59acd0238493\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" Dec 01 15:03:11 crc kubenswrapper[4637]: I1201 15:03:11.885901 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-nvk98\" (UID: \"912d084a-f1c0-4389-a7ca-59acd0238493\") " 
pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" Dec 01 15:03:11 crc kubenswrapper[4637]: I1201 15:03:11.885956 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-nvk98\" (UID: \"912d084a-f1c0-4389-a7ca-59acd0238493\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" Dec 01 15:03:11 crc kubenswrapper[4637]: I1201 15:03:11.886002 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-nvk98\" (UID: \"912d084a-f1c0-4389-a7ca-59acd0238493\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" Dec 01 15:03:11 crc kubenswrapper[4637]: I1201 15:03:11.965098 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 01 15:03:11 crc kubenswrapper[4637]: I1201 15:03:11.988003 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-nvk98\" (UID: \"912d084a-f1c0-4389-a7ca-59acd0238493\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" Dec 01 15:03:11 crc kubenswrapper[4637]: I1201 15:03:11.988078 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-config\") pod \"dnsmasq-dns-6d5b6d6b67-nvk98\" (UID: \"912d084a-f1c0-4389-a7ca-59acd0238493\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" Dec 01 15:03:11 crc kubenswrapper[4637]: I1201 15:03:11.988098 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-nvk98\" (UID: \"912d084a-f1c0-4389-a7ca-59acd0238493\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" Dec 01 15:03:11 crc kubenswrapper[4637]: I1201 15:03:11.988117 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-nvk98\" (UID: \"912d084a-f1c0-4389-a7ca-59acd0238493\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" Dec 01 15:03:11 crc kubenswrapper[4637]: I1201 15:03:11.988153 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-nvk98\" (UID: \"912d084a-f1c0-4389-a7ca-59acd0238493\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" Dec 01 15:03:11 crc kubenswrapper[4637]: I1201 15:03:11.988191 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9lh6\" (UniqueName: \"kubernetes.io/projected/912d084a-f1c0-4389-a7ca-59acd0238493-kube-api-access-b9lh6\") pod \"dnsmasq-dns-6d5b6d6b67-nvk98\" (UID: \"912d084a-f1c0-4389-a7ca-59acd0238493\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" Dec 01 15:03:11 crc kubenswrapper[4637]: I1201 15:03:11.989591 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-nvk98\" (UID: \"912d084a-f1c0-4389-a7ca-59acd0238493\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" Dec 01 15:03:11 crc kubenswrapper[4637]: I1201 15:03:11.990302 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-config\") pod \"dnsmasq-dns-6d5b6d6b67-nvk98\" (UID: \"912d084a-f1c0-4389-a7ca-59acd0238493\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" Dec 01 15:03:11 crc kubenswrapper[4637]: I1201 15:03:11.990394 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-nvk98\" (UID: \"912d084a-f1c0-4389-a7ca-59acd0238493\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" Dec 01 15:03:11 crc kubenswrapper[4637]: I1201 15:03:11.990487 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-nvk98\" (UID: \"912d084a-f1c0-4389-a7ca-59acd0238493\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" Dec 01 15:03:11 crc kubenswrapper[4637]: I1201 15:03:11.990901 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-nvk98\" (UID: \"912d084a-f1c0-4389-a7ca-59acd0238493\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.015645 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9lh6\" (UniqueName: \"kubernetes.io/projected/912d084a-f1c0-4389-a7ca-59acd0238493-kube-api-access-b9lh6\") pod \"dnsmasq-dns-6d5b6d6b67-nvk98\" (UID: \"912d084a-f1c0-4389-a7ca-59acd0238493\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.081130 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.379872 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-dbjxn"] Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.382895 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dbjxn" Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.396800 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-dbjxn"] Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.443906 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-6pfn9"] Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.445507 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6pfn9" Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.500790 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcwtt\" (UniqueName: \"kubernetes.io/projected/0f0c2216-424c-4d29-948c-90418de8b7aa-kube-api-access-bcwtt\") pod \"barbican-db-create-6pfn9\" (UID: \"0f0c2216-424c-4d29-948c-90418de8b7aa\") " pod="openstack/barbican-db-create-6pfn9" Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.500871 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zvnf\" (UniqueName: \"kubernetes.io/projected/c9fcf440-e6a8-4aea-af11-c65be59ddd4b-kube-api-access-8zvnf\") pod \"cinder-db-create-dbjxn\" (UID: \"c9fcf440-e6a8-4aea-af11-c65be59ddd4b\") " pod="openstack/cinder-db-create-dbjxn" Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.510218 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-6pfn9"] Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.602576 4637 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8zvnf\" (UniqueName: \"kubernetes.io/projected/c9fcf440-e6a8-4aea-af11-c65be59ddd4b-kube-api-access-8zvnf\") pod \"cinder-db-create-dbjxn\" (UID: \"c9fcf440-e6a8-4aea-af11-c65be59ddd4b\") " pod="openstack/cinder-db-create-dbjxn" Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.603253 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcwtt\" (UniqueName: \"kubernetes.io/projected/0f0c2216-424c-4d29-948c-90418de8b7aa-kube-api-access-bcwtt\") pod \"barbican-db-create-6pfn9\" (UID: \"0f0c2216-424c-4d29-948c-90418de8b7aa\") " pod="openstack/barbican-db-create-6pfn9" Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.625820 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zvnf\" (UniqueName: \"kubernetes.io/projected/c9fcf440-e6a8-4aea-af11-c65be59ddd4b-kube-api-access-8zvnf\") pod \"cinder-db-create-dbjxn\" (UID: \"c9fcf440-e6a8-4aea-af11-c65be59ddd4b\") " pod="openstack/cinder-db-create-dbjxn" Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.666856 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcwtt\" (UniqueName: \"kubernetes.io/projected/0f0c2216-424c-4d29-948c-90418de8b7aa-kube-api-access-bcwtt\") pod \"barbican-db-create-6pfn9\" (UID: \"0f0c2216-424c-4d29-948c-90418de8b7aa\") " pod="openstack/barbican-db-create-6pfn9" Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.708445 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dbjxn" Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.749812 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-m6xt2"] Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.751273 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-m6xt2" Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.788668 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6pfn9" Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.799380 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-m6xt2"] Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.807070 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq2d5\" (UniqueName: \"kubernetes.io/projected/d9ec1661-7b3b-46b7-844c-e0278d64bc38-kube-api-access-hq2d5\") pod \"neutron-db-create-m6xt2\" (UID: \"d9ec1661-7b3b-46b7-844c-e0278d64bc38\") " pod="openstack/neutron-db-create-m6xt2" Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.908337 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-4t992"] Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.909541 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-4t992" Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.912774 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq2d5\" (UniqueName: \"kubernetes.io/projected/d9ec1661-7b3b-46b7-844c-e0278d64bc38-kube-api-access-hq2d5\") pod \"neutron-db-create-m6xt2\" (UID: \"d9ec1661-7b3b-46b7-844c-e0278d64bc38\") " pod="openstack/neutron-db-create-m6xt2" Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.913071 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jfpqw" Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.913284 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.916822 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.916856 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.934172 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4t992"] Dec 01 15:03:12 crc kubenswrapper[4637]: I1201 15:03:12.962875 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq2d5\" (UniqueName: \"kubernetes.io/projected/d9ec1661-7b3b-46b7-844c-e0278d64bc38-kube-api-access-hq2d5\") pod \"neutron-db-create-m6xt2\" (UID: \"d9ec1661-7b3b-46b7-844c-e0278d64bc38\") " pod="openstack/neutron-db-create-m6xt2" Dec 01 15:03:13 crc kubenswrapper[4637]: I1201 15:03:13.015051 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cc717b2-b55e-4131-bd83-e041d4811607-combined-ca-bundle\") pod \"keystone-db-sync-4t992\" (UID: 
\"1cc717b2-b55e-4131-bd83-e041d4811607\") " pod="openstack/keystone-db-sync-4t992" Dec 01 15:03:13 crc kubenswrapper[4637]: I1201 15:03:13.015167 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-479jk\" (UniqueName: \"kubernetes.io/projected/1cc717b2-b55e-4131-bd83-e041d4811607-kube-api-access-479jk\") pod \"keystone-db-sync-4t992\" (UID: \"1cc717b2-b55e-4131-bd83-e041d4811607\") " pod="openstack/keystone-db-sync-4t992" Dec 01 15:03:13 crc kubenswrapper[4637]: I1201 15:03:13.015210 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cc717b2-b55e-4131-bd83-e041d4811607-config-data\") pod \"keystone-db-sync-4t992\" (UID: \"1cc717b2-b55e-4131-bd83-e041d4811607\") " pod="openstack/keystone-db-sync-4t992" Dec 01 15:03:13 crc kubenswrapper[4637]: I1201 15:03:13.072285 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-m6xt2" Dec 01 15:03:13 crc kubenswrapper[4637]: I1201 15:03:13.117275 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-479jk\" (UniqueName: \"kubernetes.io/projected/1cc717b2-b55e-4131-bd83-e041d4811607-kube-api-access-479jk\") pod \"keystone-db-sync-4t992\" (UID: \"1cc717b2-b55e-4131-bd83-e041d4811607\") " pod="openstack/keystone-db-sync-4t992" Dec 01 15:03:13 crc kubenswrapper[4637]: I1201 15:03:13.117348 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cc717b2-b55e-4131-bd83-e041d4811607-config-data\") pod \"keystone-db-sync-4t992\" (UID: \"1cc717b2-b55e-4131-bd83-e041d4811607\") " pod="openstack/keystone-db-sync-4t992" Dec 01 15:03:13 crc kubenswrapper[4637]: I1201 15:03:13.117459 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1cc717b2-b55e-4131-bd83-e041d4811607-combined-ca-bundle\") pod \"keystone-db-sync-4t992\" (UID: \"1cc717b2-b55e-4131-bd83-e041d4811607\") " pod="openstack/keystone-db-sync-4t992" Dec 01 15:03:13 crc kubenswrapper[4637]: I1201 15:03:13.123476 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cc717b2-b55e-4131-bd83-e041d4811607-config-data\") pod \"keystone-db-sync-4t992\" (UID: \"1cc717b2-b55e-4131-bd83-e041d4811607\") " pod="openstack/keystone-db-sync-4t992" Dec 01 15:03:13 crc kubenswrapper[4637]: I1201 15:03:13.124670 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cc717b2-b55e-4131-bd83-e041d4811607-combined-ca-bundle\") pod \"keystone-db-sync-4t992\" (UID: \"1cc717b2-b55e-4131-bd83-e041d4811607\") " pod="openstack/keystone-db-sync-4t992" Dec 01 15:03:13 crc kubenswrapper[4637]: I1201 15:03:13.140602 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-479jk\" (UniqueName: \"kubernetes.io/projected/1cc717b2-b55e-4131-bd83-e041d4811607-kube-api-access-479jk\") pod \"keystone-db-sync-4t992\" (UID: \"1cc717b2-b55e-4131-bd83-e041d4811607\") " pod="openstack/keystone-db-sync-4t992" Dec 01 15:03:13 crc kubenswrapper[4637]: I1201 15:03:13.230864 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-4t992" Dec 01 15:03:13 crc kubenswrapper[4637]: E1201 15:03:13.947868 4637 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.204:51898->38.102.83.204:38409: write tcp 38.102.83.204:51898->38.102.83.204:38409: write: broken pipe Dec 01 15:03:16 crc kubenswrapper[4637]: E1201 15:03:16.345857 4637 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30f33978_c393_4faf_99c0_b6ce509f1d3f.slice/crio-2ef67e4525c9b73e6616612d6ccc7159ce8d82799ebfa8ba7b4ec6d608d35b10\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30f33978_c393_4faf_99c0_b6ce509f1d3f.slice\": RecentStats: unable to find data in memory cache]" Dec 01 15:03:19 crc kubenswrapper[4637]: I1201 15:03:19.699757 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4t992"] Dec 01 15:03:19 crc kubenswrapper[4637]: I1201 15:03:19.795052 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-6pfn9"] Dec 01 15:03:19 crc kubenswrapper[4637]: I1201 15:03:19.798653 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-dbjxn"] Dec 01 15:03:19 crc kubenswrapper[4637]: I1201 15:03:19.810894 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-nvk98"] Dec 01 15:03:19 crc kubenswrapper[4637]: I1201 15:03:19.887533 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-m6xt2"] Dec 01 15:03:20 crc kubenswrapper[4637]: I1201 15:03:20.426275 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bhz5d" event={"ID":"909f05de-3353-479b-b281-41bdb6d455fd","Type":"ContainerStarted","Data":"f3617393c1c8b917e13c2902e4f91b5d908beb6a1b5494113448ae691ce3aa8f"} 
Dec 01 15:03:20 crc kubenswrapper[4637]: I1201 15:03:20.428245 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4t992" event={"ID":"1cc717b2-b55e-4131-bd83-e041d4811607","Type":"ContainerStarted","Data":"6e1f51b169f48f89de377724257005ea897f10d70a3cd112556613fd8d994d1d"} Dec 01 15:03:20 crc kubenswrapper[4637]: I1201 15:03:20.430050 4637 generic.go:334] "Generic (PLEG): container finished" podID="d9ec1661-7b3b-46b7-844c-e0278d64bc38" containerID="eedb9454cd7e20853be9614e10c0327b33f1539bea136c61241c41dd49349346" exitCode=0 Dec 01 15:03:20 crc kubenswrapper[4637]: I1201 15:03:20.430213 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-m6xt2" event={"ID":"d9ec1661-7b3b-46b7-844c-e0278d64bc38","Type":"ContainerDied","Data":"eedb9454cd7e20853be9614e10c0327b33f1539bea136c61241c41dd49349346"} Dec 01 15:03:20 crc kubenswrapper[4637]: I1201 15:03:20.430251 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-m6xt2" event={"ID":"d9ec1661-7b3b-46b7-844c-e0278d64bc38","Type":"ContainerStarted","Data":"8027b70e5924a8b6d6b651302f6787f022785e87adb67014fcf406988e3e690c"} Dec 01 15:03:20 crc kubenswrapper[4637]: I1201 15:03:20.433253 4637 generic.go:334] "Generic (PLEG): container finished" podID="912d084a-f1c0-4389-a7ca-59acd0238493" containerID="f40e86093f45636caa05ca302334f0fb75851c11110d7803f243321d049281b4" exitCode=0 Dec 01 15:03:20 crc kubenswrapper[4637]: I1201 15:03:20.433312 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" event={"ID":"912d084a-f1c0-4389-a7ca-59acd0238493","Type":"ContainerDied","Data":"f40e86093f45636caa05ca302334f0fb75851c11110d7803f243321d049281b4"} Dec 01 15:03:20 crc kubenswrapper[4637]: I1201 15:03:20.433338 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" 
event={"ID":"912d084a-f1c0-4389-a7ca-59acd0238493","Type":"ContainerStarted","Data":"cb7e5c47c4e1d74bfbcfa1d67118e85441e303d6c2f9c26a956d17547cc102a0"} Dec 01 15:03:20 crc kubenswrapper[4637]: I1201 15:03:20.435995 4637 generic.go:334] "Generic (PLEG): container finished" podID="c9fcf440-e6a8-4aea-af11-c65be59ddd4b" containerID="a8a7321851b2fcfb7d364af2d3a9b1f1fa91edf57dfa7a6259656306ddc25c15" exitCode=0 Dec 01 15:03:20 crc kubenswrapper[4637]: I1201 15:03:20.436052 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dbjxn" event={"ID":"c9fcf440-e6a8-4aea-af11-c65be59ddd4b","Type":"ContainerDied","Data":"a8a7321851b2fcfb7d364af2d3a9b1f1fa91edf57dfa7a6259656306ddc25c15"} Dec 01 15:03:20 crc kubenswrapper[4637]: I1201 15:03:20.436222 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dbjxn" event={"ID":"c9fcf440-e6a8-4aea-af11-c65be59ddd4b","Type":"ContainerStarted","Data":"6ff11bc20e91bfa096599fd2fee59bfb0cd730d2ece69fe9849d639a0e554dfe"} Dec 01 15:03:20 crc kubenswrapper[4637]: I1201 15:03:20.439606 4637 generic.go:334] "Generic (PLEG): container finished" podID="0f0c2216-424c-4d29-948c-90418de8b7aa" containerID="a72484f80d650f7dd16d7a74a8de8628b186a014324d49699570cb3d6a91e19a" exitCode=0 Dec 01 15:03:20 crc kubenswrapper[4637]: I1201 15:03:20.439641 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6pfn9" event={"ID":"0f0c2216-424c-4d29-948c-90418de8b7aa","Type":"ContainerDied","Data":"a72484f80d650f7dd16d7a74a8de8628b186a014324d49699570cb3d6a91e19a"} Dec 01 15:03:20 crc kubenswrapper[4637]: I1201 15:03:20.439663 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6pfn9" event={"ID":"0f0c2216-424c-4d29-948c-90418de8b7aa","Type":"ContainerStarted","Data":"a85ede728412ff89d0877f256bcec12a5d335981abd5f891d422e3c513450cb6"} Dec 01 15:03:20 crc kubenswrapper[4637]: I1201 15:03:20.464739 4637 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-bhz5d" podStartSLOduration=3.884295087 podStartE2EDuration="20.464707397s" podCreationTimestamp="2025-12-01 15:03:00 +0000 UTC" firstStartedPulling="2025-12-01 15:03:02.611132795 +0000 UTC m=+1033.128841623" lastFinishedPulling="2025-12-01 15:03:19.191545105 +0000 UTC m=+1049.709253933" observedRunningTime="2025-12-01 15:03:20.453391531 +0000 UTC m=+1050.971100379" watchObservedRunningTime="2025-12-01 15:03:20.464707397 +0000 UTC m=+1050.982416235" Dec 01 15:03:21 crc kubenswrapper[4637]: I1201 15:03:21.260117 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:03:21 crc kubenswrapper[4637]: I1201 15:03:21.505355 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" event={"ID":"912d084a-f1c0-4389-a7ca-59acd0238493","Type":"ContainerStarted","Data":"99f908c1d1d89787eee196da3bdfef4cfa942a87498a5ebdeb2f7e271942e027"} Dec 01 15:03:21 crc kubenswrapper[4637]: I1201 15:03:21.505757 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" Dec 01 15:03:21 crc kubenswrapper[4637]: I1201 15:03:21.554187 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" podStartSLOduration=10.55417327 podStartE2EDuration="10.55417327s" podCreationTimestamp="2025-12-01 15:03:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:03:21.553660326 +0000 UTC m=+1052.071369154" watchObservedRunningTime="2025-12-01 15:03:21.55417327 +0000 UTC m=+1052.071882098" Dec 01 15:03:22 crc kubenswrapper[4637]: I1201 15:03:22.141170 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-m6xt2" Dec 01 15:03:22 crc kubenswrapper[4637]: I1201 15:03:22.155621 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6pfn9" Dec 01 15:03:22 crc kubenswrapper[4637]: I1201 15:03:22.160974 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dbjxn" Dec 01 15:03:22 crc kubenswrapper[4637]: I1201 15:03:22.213853 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcwtt\" (UniqueName: \"kubernetes.io/projected/0f0c2216-424c-4d29-948c-90418de8b7aa-kube-api-access-bcwtt\") pod \"0f0c2216-424c-4d29-948c-90418de8b7aa\" (UID: \"0f0c2216-424c-4d29-948c-90418de8b7aa\") " Dec 01 15:03:22 crc kubenswrapper[4637]: I1201 15:03:22.214040 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq2d5\" (UniqueName: \"kubernetes.io/projected/d9ec1661-7b3b-46b7-844c-e0278d64bc38-kube-api-access-hq2d5\") pod \"d9ec1661-7b3b-46b7-844c-e0278d64bc38\" (UID: \"d9ec1661-7b3b-46b7-844c-e0278d64bc38\") " Dec 01 15:03:22 crc kubenswrapper[4637]: I1201 15:03:22.214116 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zvnf\" (UniqueName: \"kubernetes.io/projected/c9fcf440-e6a8-4aea-af11-c65be59ddd4b-kube-api-access-8zvnf\") pod \"c9fcf440-e6a8-4aea-af11-c65be59ddd4b\" (UID: \"c9fcf440-e6a8-4aea-af11-c65be59ddd4b\") " Dec 01 15:03:22 crc kubenswrapper[4637]: I1201 15:03:22.239301 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9ec1661-7b3b-46b7-844c-e0278d64bc38-kube-api-access-hq2d5" (OuterVolumeSpecName: "kube-api-access-hq2d5") pod "d9ec1661-7b3b-46b7-844c-e0278d64bc38" (UID: "d9ec1661-7b3b-46b7-844c-e0278d64bc38"). InnerVolumeSpecName "kube-api-access-hq2d5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:03:22 crc kubenswrapper[4637]: I1201 15:03:22.239419 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f0c2216-424c-4d29-948c-90418de8b7aa-kube-api-access-bcwtt" (OuterVolumeSpecName: "kube-api-access-bcwtt") pod "0f0c2216-424c-4d29-948c-90418de8b7aa" (UID: "0f0c2216-424c-4d29-948c-90418de8b7aa"). InnerVolumeSpecName "kube-api-access-bcwtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:03:22 crc kubenswrapper[4637]: I1201 15:03:22.239453 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9fcf440-e6a8-4aea-af11-c65be59ddd4b-kube-api-access-8zvnf" (OuterVolumeSpecName: "kube-api-access-8zvnf") pod "c9fcf440-e6a8-4aea-af11-c65be59ddd4b" (UID: "c9fcf440-e6a8-4aea-af11-c65be59ddd4b"). InnerVolumeSpecName "kube-api-access-8zvnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:03:22 crc kubenswrapper[4637]: I1201 15:03:22.319227 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq2d5\" (UniqueName: \"kubernetes.io/projected/d9ec1661-7b3b-46b7-844c-e0278d64bc38-kube-api-access-hq2d5\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:22 crc kubenswrapper[4637]: I1201 15:03:22.319281 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zvnf\" (UniqueName: \"kubernetes.io/projected/c9fcf440-e6a8-4aea-af11-c65be59ddd4b-kube-api-access-8zvnf\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:22 crc kubenswrapper[4637]: I1201 15:03:22.319292 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcwtt\" (UniqueName: \"kubernetes.io/projected/0f0c2216-424c-4d29-948c-90418de8b7aa-kube-api-access-bcwtt\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:22 crc kubenswrapper[4637]: I1201 15:03:22.509325 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-m6xt2" 
event={"ID":"d9ec1661-7b3b-46b7-844c-e0278d64bc38","Type":"ContainerDied","Data":"8027b70e5924a8b6d6b651302f6787f022785e87adb67014fcf406988e3e690c"} Dec 01 15:03:22 crc kubenswrapper[4637]: I1201 15:03:22.509648 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8027b70e5924a8b6d6b651302f6787f022785e87adb67014fcf406988e3e690c" Dec 01 15:03:22 crc kubenswrapper[4637]: I1201 15:03:22.509761 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-m6xt2" Dec 01 15:03:22 crc kubenswrapper[4637]: I1201 15:03:22.521126 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dbjxn" event={"ID":"c9fcf440-e6a8-4aea-af11-c65be59ddd4b","Type":"ContainerDied","Data":"6ff11bc20e91bfa096599fd2fee59bfb0cd730d2ece69fe9849d639a0e554dfe"} Dec 01 15:03:22 crc kubenswrapper[4637]: I1201 15:03:22.521170 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ff11bc20e91bfa096599fd2fee59bfb0cd730d2ece69fe9849d639a0e554dfe" Dec 01 15:03:22 crc kubenswrapper[4637]: I1201 15:03:22.521232 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dbjxn" Dec 01 15:03:22 crc kubenswrapper[4637]: I1201 15:03:22.532343 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-6pfn9" Dec 01 15:03:22 crc kubenswrapper[4637]: I1201 15:03:22.532656 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6pfn9" event={"ID":"0f0c2216-424c-4d29-948c-90418de8b7aa","Type":"ContainerDied","Data":"a85ede728412ff89d0877f256bcec12a5d335981abd5f891d422e3c513450cb6"} Dec 01 15:03:22 crc kubenswrapper[4637]: I1201 15:03:22.532690 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a85ede728412ff89d0877f256bcec12a5d335981abd5f891d422e3c513450cb6" Dec 01 15:03:26 crc kubenswrapper[4637]: I1201 15:03:26.577018 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4t992" event={"ID":"1cc717b2-b55e-4131-bd83-e041d4811607","Type":"ContainerStarted","Data":"2aed1866c2396d79f75ebc786b2693962e14643e0e5c5847a1182b8fde5dee21"} Dec 01 15:03:26 crc kubenswrapper[4637]: E1201 15:03:26.584536 4637 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30f33978_c393_4faf_99c0_b6ce509f1d3f.slice/crio-2ef67e4525c9b73e6616612d6ccc7159ce8d82799ebfa8ba7b4ec6d608d35b10\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30f33978_c393_4faf_99c0_b6ce509f1d3f.slice\": RecentStats: unable to find data in memory cache]" Dec 01 15:03:26 crc kubenswrapper[4637]: I1201 15:03:26.610361 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-4t992" podStartSLOduration=8.277108433 podStartE2EDuration="14.610289135s" podCreationTimestamp="2025-12-01 15:03:12 +0000 UTC" firstStartedPulling="2025-12-01 15:03:19.740387429 +0000 UTC m=+1050.258096257" lastFinishedPulling="2025-12-01 15:03:26.073568131 +0000 UTC m=+1056.591276959" observedRunningTime="2025-12-01 15:03:26.602588366 +0000 UTC 
m=+1057.120297204" watchObservedRunningTime="2025-12-01 15:03:26.610289135 +0000 UTC m=+1057.127998013" Dec 01 15:03:27 crc kubenswrapper[4637]: I1201 15:03:27.084116 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" Dec 01 15:03:27 crc kubenswrapper[4637]: I1201 15:03:27.162740 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-5qcft"] Dec 01 15:03:27 crc kubenswrapper[4637]: I1201 15:03:27.162983 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-5qcft" podUID="729b2ee4-c622-4dea-86b7-94fffda9ed7f" containerName="dnsmasq-dns" containerID="cri-o://6654a325edd98e1a49035ff61887354a439bbee24230e33ffa8e4b5152e71caa" gracePeriod=10 Dec 01 15:03:27 crc kubenswrapper[4637]: I1201 15:03:27.588020 4637 generic.go:334] "Generic (PLEG): container finished" podID="729b2ee4-c622-4dea-86b7-94fffda9ed7f" containerID="6654a325edd98e1a49035ff61887354a439bbee24230e33ffa8e4b5152e71caa" exitCode=0 Dec 01 15:03:27 crc kubenswrapper[4637]: I1201 15:03:27.589758 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-5qcft" event={"ID":"729b2ee4-c622-4dea-86b7-94fffda9ed7f","Type":"ContainerDied","Data":"6654a325edd98e1a49035ff61887354a439bbee24230e33ffa8e4b5152e71caa"} Dec 01 15:03:27 crc kubenswrapper[4637]: I1201 15:03:27.706655 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-5qcft" Dec 01 15:03:27 crc kubenswrapper[4637]: I1201 15:03:27.726086 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/729b2ee4-c622-4dea-86b7-94fffda9ed7f-ovsdbserver-nb\") pod \"729b2ee4-c622-4dea-86b7-94fffda9ed7f\" (UID: \"729b2ee4-c622-4dea-86b7-94fffda9ed7f\") " Dec 01 15:03:27 crc kubenswrapper[4637]: I1201 15:03:27.726133 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mfk7\" (UniqueName: \"kubernetes.io/projected/729b2ee4-c622-4dea-86b7-94fffda9ed7f-kube-api-access-6mfk7\") pod \"729b2ee4-c622-4dea-86b7-94fffda9ed7f\" (UID: \"729b2ee4-c622-4dea-86b7-94fffda9ed7f\") " Dec 01 15:03:27 crc kubenswrapper[4637]: I1201 15:03:27.726250 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/729b2ee4-c622-4dea-86b7-94fffda9ed7f-config\") pod \"729b2ee4-c622-4dea-86b7-94fffda9ed7f\" (UID: \"729b2ee4-c622-4dea-86b7-94fffda9ed7f\") " Dec 01 15:03:27 crc kubenswrapper[4637]: I1201 15:03:27.726307 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/729b2ee4-c622-4dea-86b7-94fffda9ed7f-ovsdbserver-sb\") pod \"729b2ee4-c622-4dea-86b7-94fffda9ed7f\" (UID: \"729b2ee4-c622-4dea-86b7-94fffda9ed7f\") " Dec 01 15:03:27 crc kubenswrapper[4637]: I1201 15:03:27.726342 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/729b2ee4-c622-4dea-86b7-94fffda9ed7f-dns-svc\") pod \"729b2ee4-c622-4dea-86b7-94fffda9ed7f\" (UID: \"729b2ee4-c622-4dea-86b7-94fffda9ed7f\") " Dec 01 15:03:27 crc kubenswrapper[4637]: I1201 15:03:27.742947 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/729b2ee4-c622-4dea-86b7-94fffda9ed7f-kube-api-access-6mfk7" (OuterVolumeSpecName: "kube-api-access-6mfk7") pod "729b2ee4-c622-4dea-86b7-94fffda9ed7f" (UID: "729b2ee4-c622-4dea-86b7-94fffda9ed7f"). InnerVolumeSpecName "kube-api-access-6mfk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:03:27 crc kubenswrapper[4637]: I1201 15:03:27.814569 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/729b2ee4-c622-4dea-86b7-94fffda9ed7f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "729b2ee4-c622-4dea-86b7-94fffda9ed7f" (UID: "729b2ee4-c622-4dea-86b7-94fffda9ed7f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:03:27 crc kubenswrapper[4637]: I1201 15:03:27.839647 4637 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/729b2ee4-c622-4dea-86b7-94fffda9ed7f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:27 crc kubenswrapper[4637]: I1201 15:03:27.839700 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mfk7\" (UniqueName: \"kubernetes.io/projected/729b2ee4-c622-4dea-86b7-94fffda9ed7f-kube-api-access-6mfk7\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:27 crc kubenswrapper[4637]: I1201 15:03:27.840419 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/729b2ee4-c622-4dea-86b7-94fffda9ed7f-config" (OuterVolumeSpecName: "config") pod "729b2ee4-c622-4dea-86b7-94fffda9ed7f" (UID: "729b2ee4-c622-4dea-86b7-94fffda9ed7f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:03:27 crc kubenswrapper[4637]: I1201 15:03:27.849552 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/729b2ee4-c622-4dea-86b7-94fffda9ed7f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "729b2ee4-c622-4dea-86b7-94fffda9ed7f" (UID: "729b2ee4-c622-4dea-86b7-94fffda9ed7f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:03:27 crc kubenswrapper[4637]: I1201 15:03:27.859772 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/729b2ee4-c622-4dea-86b7-94fffda9ed7f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "729b2ee4-c622-4dea-86b7-94fffda9ed7f" (UID: "729b2ee4-c622-4dea-86b7-94fffda9ed7f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:03:27 crc kubenswrapper[4637]: I1201 15:03:27.941766 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/729b2ee4-c622-4dea-86b7-94fffda9ed7f-config\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:27 crc kubenswrapper[4637]: I1201 15:03:27.941813 4637 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/729b2ee4-c622-4dea-86b7-94fffda9ed7f-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:27 crc kubenswrapper[4637]: I1201 15:03:27.941826 4637 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/729b2ee4-c622-4dea-86b7-94fffda9ed7f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:28 crc kubenswrapper[4637]: I1201 15:03:28.599089 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-5qcft" Dec 01 15:03:28 crc kubenswrapper[4637]: I1201 15:03:28.599087 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-5qcft" event={"ID":"729b2ee4-c622-4dea-86b7-94fffda9ed7f","Type":"ContainerDied","Data":"a3036bef201de54135164f9572c8ddfdb976c8c9cd05fa1b9bbe382462a52230"} Dec 01 15:03:28 crc kubenswrapper[4637]: I1201 15:03:28.599233 4637 scope.go:117] "RemoveContainer" containerID="6654a325edd98e1a49035ff61887354a439bbee24230e33ffa8e4b5152e71caa" Dec 01 15:03:28 crc kubenswrapper[4637]: I1201 15:03:28.627898 4637 scope.go:117] "RemoveContainer" containerID="50acc7c1ecc7f56bbe660aed7e1997c6e1db4e496a8fd1818bc3c852d5afa506" Dec 01 15:03:28 crc kubenswrapper[4637]: I1201 15:03:28.674227 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-5qcft"] Dec 01 15:03:28 crc kubenswrapper[4637]: I1201 15:03:28.682861 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-5qcft"] Dec 01 15:03:29 crc kubenswrapper[4637]: I1201 15:03:29.615311 4637 generic.go:334] "Generic (PLEG): container finished" podID="1cc717b2-b55e-4131-bd83-e041d4811607" containerID="2aed1866c2396d79f75ebc786b2693962e14643e0e5c5847a1182b8fde5dee21" exitCode=0 Dec 01 15:03:29 crc kubenswrapper[4637]: I1201 15:03:29.615378 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4t992" event={"ID":"1cc717b2-b55e-4131-bd83-e041d4811607","Type":"ContainerDied","Data":"2aed1866c2396d79f75ebc786b2693962e14643e0e5c5847a1182b8fde5dee21"} Dec 01 15:03:29 crc kubenswrapper[4637]: I1201 15:03:29.787916 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="729b2ee4-c622-4dea-86b7-94fffda9ed7f" path="/var/lib/kubelet/pods/729b2ee4-c622-4dea-86b7-94fffda9ed7f/volumes" Dec 01 15:03:30 crc kubenswrapper[4637]: I1201 15:03:30.623759 4637 generic.go:334] "Generic (PLEG): container finished" 
podID="909f05de-3353-479b-b281-41bdb6d455fd" containerID="f3617393c1c8b917e13c2902e4f91b5d908beb6a1b5494113448ae691ce3aa8f" exitCode=0 Dec 01 15:03:30 crc kubenswrapper[4637]: I1201 15:03:30.624058 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bhz5d" event={"ID":"909f05de-3353-479b-b281-41bdb6d455fd","Type":"ContainerDied","Data":"f3617393c1c8b917e13c2902e4f91b5d908beb6a1b5494113448ae691ce3aa8f"} Dec 01 15:03:30 crc kubenswrapper[4637]: I1201 15:03:30.986387 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4t992" Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.001559 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-479jk\" (UniqueName: \"kubernetes.io/projected/1cc717b2-b55e-4131-bd83-e041d4811607-kube-api-access-479jk\") pod \"1cc717b2-b55e-4131-bd83-e041d4811607\" (UID: \"1cc717b2-b55e-4131-bd83-e041d4811607\") " Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.001610 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cc717b2-b55e-4131-bd83-e041d4811607-combined-ca-bundle\") pod \"1cc717b2-b55e-4131-bd83-e041d4811607\" (UID: \"1cc717b2-b55e-4131-bd83-e041d4811607\") " Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.012215 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cc717b2-b55e-4131-bd83-e041d4811607-kube-api-access-479jk" (OuterVolumeSpecName: "kube-api-access-479jk") pod "1cc717b2-b55e-4131-bd83-e041d4811607" (UID: "1cc717b2-b55e-4131-bd83-e041d4811607"). InnerVolumeSpecName "kube-api-access-479jk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.031208 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cc717b2-b55e-4131-bd83-e041d4811607-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1cc717b2-b55e-4131-bd83-e041d4811607" (UID: "1cc717b2-b55e-4131-bd83-e041d4811607"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.102512 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cc717b2-b55e-4131-bd83-e041d4811607-config-data\") pod \"1cc717b2-b55e-4131-bd83-e041d4811607\" (UID: \"1cc717b2-b55e-4131-bd83-e041d4811607\") " Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.103117 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-479jk\" (UniqueName: \"kubernetes.io/projected/1cc717b2-b55e-4131-bd83-e041d4811607-kube-api-access-479jk\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.103216 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cc717b2-b55e-4131-bd83-e041d4811607-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.158644 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cc717b2-b55e-4131-bd83-e041d4811607-config-data" (OuterVolumeSpecName: "config-data") pod "1cc717b2-b55e-4131-bd83-e041d4811607" (UID: "1cc717b2-b55e-4131-bd83-e041d4811607"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.205108 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cc717b2-b55e-4131-bd83-e041d4811607-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.639576 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4t992" Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.642112 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4t992" event={"ID":"1cc717b2-b55e-4131-bd83-e041d4811607","Type":"ContainerDied","Data":"6e1f51b169f48f89de377724257005ea897f10d70a3cd112556613fd8d994d1d"} Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.642190 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e1f51b169f48f89de377724257005ea897f10d70a3cd112556613fd8d994d1d" Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.941075 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-wtsst"] Dec 01 15:03:31 crc kubenswrapper[4637]: E1201 15:03:31.941456 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f0c2216-424c-4d29-948c-90418de8b7aa" containerName="mariadb-database-create" Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.941474 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f0c2216-424c-4d29-948c-90418de8b7aa" containerName="mariadb-database-create" Dec 01 15:03:31 crc kubenswrapper[4637]: E1201 15:03:31.941495 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cc717b2-b55e-4131-bd83-e041d4811607" containerName="keystone-db-sync" Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.941500 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cc717b2-b55e-4131-bd83-e041d4811607" containerName="keystone-db-sync" Dec 01 
15:03:31 crc kubenswrapper[4637]: E1201 15:03:31.941511 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="729b2ee4-c622-4dea-86b7-94fffda9ed7f" containerName="init" Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.941516 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="729b2ee4-c622-4dea-86b7-94fffda9ed7f" containerName="init" Dec 01 15:03:31 crc kubenswrapper[4637]: E1201 15:03:31.941533 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="729b2ee4-c622-4dea-86b7-94fffda9ed7f" containerName="dnsmasq-dns" Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.941538 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="729b2ee4-c622-4dea-86b7-94fffda9ed7f" containerName="dnsmasq-dns" Dec 01 15:03:31 crc kubenswrapper[4637]: E1201 15:03:31.941550 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9ec1661-7b3b-46b7-844c-e0278d64bc38" containerName="mariadb-database-create" Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.941556 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9ec1661-7b3b-46b7-844c-e0278d64bc38" containerName="mariadb-database-create" Dec 01 15:03:31 crc kubenswrapper[4637]: E1201 15:03:31.941574 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9fcf440-e6a8-4aea-af11-c65be59ddd4b" containerName="mariadb-database-create" Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.941580 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9fcf440-e6a8-4aea-af11-c65be59ddd4b" containerName="mariadb-database-create" Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.941751 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="729b2ee4-c622-4dea-86b7-94fffda9ed7f" containerName="dnsmasq-dns" Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.941761 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9ec1661-7b3b-46b7-844c-e0278d64bc38" containerName="mariadb-database-create" Dec 01 
15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.941776 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f0c2216-424c-4d29-948c-90418de8b7aa" containerName="mariadb-database-create" Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.941785 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9fcf440-e6a8-4aea-af11-c65be59ddd4b" containerName="mariadb-database-create" Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.941798 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cc717b2-b55e-4131-bd83-e041d4811607" containerName="keystone-db-sync" Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.942777 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-wtsst" Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.959806 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-j85h5"] Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.960839 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-j85h5" Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.986067 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jfpqw" Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.986446 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.990289 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 01 15:03:31 crc kubenswrapper[4637]: I1201 15:03:31.994728 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.013266 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-wtsst"] Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.054015 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-j85h5"] Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.087524 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-wtsst\" (UID: \"6def64d4-9e4d-4e24-9815-ea2de38309ff\") " pod="openstack/dnsmasq-dns-6f8c45789f-wtsst" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.087663 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-credential-keys\") pod \"keystone-bootstrap-j85h5\" (UID: \"a9d588ec-741e-4df6-94e1-a824c312d598\") " pod="openstack/keystone-bootstrap-j85h5" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.087710 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvlx5\" (UniqueName: \"kubernetes.io/projected/a9d588ec-741e-4df6-94e1-a824c312d598-kube-api-access-pvlx5\") pod \"keystone-bootstrap-j85h5\" (UID: \"a9d588ec-741e-4df6-94e1-a824c312d598\") " pod="openstack/keystone-bootstrap-j85h5" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.087780 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-config-data\") pod \"keystone-bootstrap-j85h5\" (UID: \"a9d588ec-741e-4df6-94e1-a824c312d598\") " pod="openstack/keystone-bootstrap-j85h5" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.087812 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-wtsst\" (UID: \"6def64d4-9e4d-4e24-9815-ea2de38309ff\") " pod="openstack/dnsmasq-dns-6f8c45789f-wtsst" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.087846 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-fernet-keys\") pod \"keystone-bootstrap-j85h5\" (UID: \"a9d588ec-741e-4df6-94e1-a824c312d598\") " pod="openstack/keystone-bootstrap-j85h5" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.087922 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-combined-ca-bundle\") pod \"keystone-bootstrap-j85h5\" (UID: \"a9d588ec-741e-4df6-94e1-a824c312d598\") " pod="openstack/keystone-bootstrap-j85h5" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.087989 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfqrn\" (UniqueName: \"kubernetes.io/projected/6def64d4-9e4d-4e24-9815-ea2de38309ff-kube-api-access-xfqrn\") pod \"dnsmasq-dns-6f8c45789f-wtsst\" (UID: \"6def64d4-9e4d-4e24-9815-ea2de38309ff\") " pod="openstack/dnsmasq-dns-6f8c45789f-wtsst" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.088030 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-config\") pod \"dnsmasq-dns-6f8c45789f-wtsst\" (UID: \"6def64d4-9e4d-4e24-9815-ea2de38309ff\") " pod="openstack/dnsmasq-dns-6f8c45789f-wtsst" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.088069 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-scripts\") pod \"keystone-bootstrap-j85h5\" (UID: \"a9d588ec-741e-4df6-94e1-a824c312d598\") " pod="openstack/keystone-bootstrap-j85h5" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.088111 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-wtsst\" (UID: \"6def64d4-9e4d-4e24-9815-ea2de38309ff\") " pod="openstack/dnsmasq-dns-6f8c45789f-wtsst" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.088137 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-wtsst\" (UID: \"6def64d4-9e4d-4e24-9815-ea2de38309ff\") " pod="openstack/dnsmasq-dns-6f8c45789f-wtsst" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.192346 4637 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvlx5\" (UniqueName: \"kubernetes.io/projected/a9d588ec-741e-4df6-94e1-a824c312d598-kube-api-access-pvlx5\") pod \"keystone-bootstrap-j85h5\" (UID: \"a9d588ec-741e-4df6-94e1-a824c312d598\") " pod="openstack/keystone-bootstrap-j85h5" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.192775 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-config-data\") pod \"keystone-bootstrap-j85h5\" (UID: \"a9d588ec-741e-4df6-94e1-a824c312d598\") " pod="openstack/keystone-bootstrap-j85h5" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.192806 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-wtsst\" (UID: \"6def64d4-9e4d-4e24-9815-ea2de38309ff\") " pod="openstack/dnsmasq-dns-6f8c45789f-wtsst" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.192831 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-fernet-keys\") pod \"keystone-bootstrap-j85h5\" (UID: \"a9d588ec-741e-4df6-94e1-a824c312d598\") " pod="openstack/keystone-bootstrap-j85h5" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.192881 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-combined-ca-bundle\") pod \"keystone-bootstrap-j85h5\" (UID: \"a9d588ec-741e-4df6-94e1-a824c312d598\") " pod="openstack/keystone-bootstrap-j85h5" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.192914 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xfqrn\" (UniqueName: \"kubernetes.io/projected/6def64d4-9e4d-4e24-9815-ea2de38309ff-kube-api-access-xfqrn\") pod \"dnsmasq-dns-6f8c45789f-wtsst\" (UID: \"6def64d4-9e4d-4e24-9815-ea2de38309ff\") " pod="openstack/dnsmasq-dns-6f8c45789f-wtsst" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.192968 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-config\") pod \"dnsmasq-dns-6f8c45789f-wtsst\" (UID: \"6def64d4-9e4d-4e24-9815-ea2de38309ff\") " pod="openstack/dnsmasq-dns-6f8c45789f-wtsst" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.193006 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-scripts\") pod \"keystone-bootstrap-j85h5\" (UID: \"a9d588ec-741e-4df6-94e1-a824c312d598\") " pod="openstack/keystone-bootstrap-j85h5" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.193038 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-wtsst\" (UID: \"6def64d4-9e4d-4e24-9815-ea2de38309ff\") " pod="openstack/dnsmasq-dns-6f8c45789f-wtsst" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.193061 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-wtsst\" (UID: \"6def64d4-9e4d-4e24-9815-ea2de38309ff\") " pod="openstack/dnsmasq-dns-6f8c45789f-wtsst" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.193140 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-wtsst\" (UID: \"6def64d4-9e4d-4e24-9815-ea2de38309ff\") " pod="openstack/dnsmasq-dns-6f8c45789f-wtsst" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.193212 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-credential-keys\") pod \"keystone-bootstrap-j85h5\" (UID: \"a9d588ec-741e-4df6-94e1-a824c312d598\") " pod="openstack/keystone-bootstrap-j85h5" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.194409 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-wtsst\" (UID: \"6def64d4-9e4d-4e24-9815-ea2de38309ff\") " pod="openstack/dnsmasq-dns-6f8c45789f-wtsst" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.195153 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-config\") pod \"dnsmasq-dns-6f8c45789f-wtsst\" (UID: \"6def64d4-9e4d-4e24-9815-ea2de38309ff\") " pod="openstack/dnsmasq-dns-6f8c45789f-wtsst" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.196254 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6bfc57fb6f-w2dxg"] Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.205363 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-wtsst\" (UID: \"6def64d4-9e4d-4e24-9815-ea2de38309ff\") " pod="openstack/dnsmasq-dns-6f8c45789f-wtsst" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.205789 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6bfc57fb6f-w2dxg" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.206519 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-wtsst\" (UID: \"6def64d4-9e4d-4e24-9815-ea2de38309ff\") " pod="openstack/dnsmasq-dns-6f8c45789f-wtsst" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.209436 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-wtsst\" (UID: \"6def64d4-9e4d-4e24-9815-ea2de38309ff\") " pod="openstack/dnsmasq-dns-6f8c45789f-wtsst" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.210860 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-credential-keys\") pod \"keystone-bootstrap-j85h5\" (UID: \"a9d588ec-741e-4df6-94e1-a824c312d598\") " pod="openstack/keystone-bootstrap-j85h5" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.211360 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-config-data\") pod \"keystone-bootstrap-j85h5\" (UID: \"a9d588ec-741e-4df6-94e1-a824c312d598\") " pod="openstack/keystone-bootstrap-j85h5" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.211875 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.213161 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.213402 4637 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-jhhr7" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.213556 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.216866 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-scripts\") pod \"keystone-bootstrap-j85h5\" (UID: \"a9d588ec-741e-4df6-94e1-a824c312d598\") " pod="openstack/keystone-bootstrap-j85h5" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.224880 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-combined-ca-bundle\") pod \"keystone-bootstrap-j85h5\" (UID: \"a9d588ec-741e-4df6-94e1-a824c312d598\") " pod="openstack/keystone-bootstrap-j85h5" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.234806 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6bfc57fb6f-w2dxg"] Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.239791 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-fernet-keys\") pod \"keystone-bootstrap-j85h5\" (UID: \"a9d588ec-741e-4df6-94e1-a824c312d598\") " pod="openstack/keystone-bootstrap-j85h5" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.248680 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvlx5\" (UniqueName: \"kubernetes.io/projected/a9d588ec-741e-4df6-94e1-a824c312d598-kube-api-access-pvlx5\") pod \"keystone-bootstrap-j85h5\" (UID: \"a9d588ec-741e-4df6-94e1-a824c312d598\") " pod="openstack/keystone-bootstrap-j85h5" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.278855 4637 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xfqrn\" (UniqueName: \"kubernetes.io/projected/6def64d4-9e4d-4e24-9815-ea2de38309ff-kube-api-access-xfqrn\") pod \"dnsmasq-dns-6f8c45789f-wtsst\" (UID: \"6def64d4-9e4d-4e24-9815-ea2de38309ff\") " pod="openstack/dnsmasq-dns-6f8c45789f-wtsst" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.315146 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-j85h5" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.317258 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-wtsst" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.383177 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-wtsst"] Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.410130 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-bhz5d" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.423720 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/909f05de-3353-479b-b281-41bdb6d455fd-config-data\") pod \"909f05de-3353-479b-b281-41bdb6d455fd\" (UID: \"909f05de-3353-479b-b281-41bdb6d455fd\") " Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.423777 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msqrn\" (UniqueName: \"kubernetes.io/projected/909f05de-3353-479b-b281-41bdb6d455fd-kube-api-access-msqrn\") pod \"909f05de-3353-479b-b281-41bdb6d455fd\" (UID: \"909f05de-3353-479b-b281-41bdb6d455fd\") " Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.423823 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/909f05de-3353-479b-b281-41bdb6d455fd-db-sync-config-data\") pod 
\"909f05de-3353-479b-b281-41bdb6d455fd\" (UID: \"909f05de-3353-479b-b281-41bdb6d455fd\") " Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.423841 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/909f05de-3353-479b-b281-41bdb6d455fd-combined-ca-bundle\") pod \"909f05de-3353-479b-b281-41bdb6d455fd\" (UID: \"909f05de-3353-479b-b281-41bdb6d455fd\") " Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.423948 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09cbc5ac-7259-494d-8c1c-5d25eac1161c-logs\") pod \"horizon-6bfc57fb6f-w2dxg\" (UID: \"09cbc5ac-7259-494d-8c1c-5d25eac1161c\") " pod="openstack/horizon-6bfc57fb6f-w2dxg" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.423981 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82zp9\" (UniqueName: \"kubernetes.io/projected/09cbc5ac-7259-494d-8c1c-5d25eac1161c-kube-api-access-82zp9\") pod \"horizon-6bfc57fb6f-w2dxg\" (UID: \"09cbc5ac-7259-494d-8c1c-5d25eac1161c\") " pod="openstack/horizon-6bfc57fb6f-w2dxg" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.424045 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/09cbc5ac-7259-494d-8c1c-5d25eac1161c-horizon-secret-key\") pod \"horizon-6bfc57fb6f-w2dxg\" (UID: \"09cbc5ac-7259-494d-8c1c-5d25eac1161c\") " pod="openstack/horizon-6bfc57fb6f-w2dxg" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.424076 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/09cbc5ac-7259-494d-8c1c-5d25eac1161c-config-data\") pod \"horizon-6bfc57fb6f-w2dxg\" (UID: \"09cbc5ac-7259-494d-8c1c-5d25eac1161c\") 
" pod="openstack/horizon-6bfc57fb6f-w2dxg" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.424105 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09cbc5ac-7259-494d-8c1c-5d25eac1161c-scripts\") pod \"horizon-6bfc57fb6f-w2dxg\" (UID: \"09cbc5ac-7259-494d-8c1c-5d25eac1161c\") " pod="openstack/horizon-6bfc57fb6f-w2dxg" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.457020 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-sb6hr"] Dec 01 15:03:32 crc kubenswrapper[4637]: E1201 15:03:32.457800 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="909f05de-3353-479b-b281-41bdb6d455fd" containerName="glance-db-sync" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.465543 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="909f05de-3353-479b-b281-41bdb6d455fd" containerName="glance-db-sync" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.466195 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="909f05de-3353-479b-b281-41bdb6d455fd" containerName="glance-db-sync" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.470138 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-sb6hr" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.483690 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/909f05de-3353-479b-b281-41bdb6d455fd-kube-api-access-msqrn" (OuterVolumeSpecName: "kube-api-access-msqrn") pod "909f05de-3353-479b-b281-41bdb6d455fd" (UID: "909f05de-3353-479b-b281-41bdb6d455fd"). InnerVolumeSpecName "kube-api-access-msqrn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.485198 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/909f05de-3353-479b-b281-41bdb6d455fd-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "909f05de-3353-479b-b281-41bdb6d455fd" (UID: "909f05de-3353-479b-b281-41bdb6d455fd"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.490980 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.491247 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-z6vzs" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.491395 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.501487 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-cnc8z"] Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.503577 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-cnc8z" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.526166 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llgsw\" (UniqueName: \"kubernetes.io/projected/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-kube-api-access-llgsw\") pod \"dnsmasq-dns-fcfdd6f9f-cnc8z\" (UID: \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-cnc8z" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.534100 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xs9s\" (UniqueName: \"kubernetes.io/projected/ecec3227-52bd-4b05-83ac-90218117a222-kube-api-access-7xs9s\") pod \"placement-db-sync-sb6hr\" (UID: \"ecec3227-52bd-4b05-83ac-90218117a222\") " pod="openstack/placement-db-sync-sb6hr" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.534369 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-cnc8z\" (UID: \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-cnc8z" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.534449 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-cnc8z\" (UID: \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-cnc8z" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.534596 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ecec3227-52bd-4b05-83ac-90218117a222-combined-ca-bundle\") pod \"placement-db-sync-sb6hr\" (UID: \"ecec3227-52bd-4b05-83ac-90218117a222\") " pod="openstack/placement-db-sync-sb6hr" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.534680 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecec3227-52bd-4b05-83ac-90218117a222-config-data\") pod \"placement-db-sync-sb6hr\" (UID: \"ecec3227-52bd-4b05-83ac-90218117a222\") " pod="openstack/placement-db-sync-sb6hr" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.534786 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/09cbc5ac-7259-494d-8c1c-5d25eac1161c-horizon-secret-key\") pod \"horizon-6bfc57fb6f-w2dxg\" (UID: \"09cbc5ac-7259-494d-8c1c-5d25eac1161c\") " pod="openstack/horizon-6bfc57fb6f-w2dxg" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.534893 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecec3227-52bd-4b05-83ac-90218117a222-scripts\") pod \"placement-db-sync-sb6hr\" (UID: \"ecec3227-52bd-4b05-83ac-90218117a222\") " pod="openstack/placement-db-sync-sb6hr" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.534994 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/09cbc5ac-7259-494d-8c1c-5d25eac1161c-config-data\") pod \"horizon-6bfc57fb6f-w2dxg\" (UID: \"09cbc5ac-7259-494d-8c1c-5d25eac1161c\") " pod="openstack/horizon-6bfc57fb6f-w2dxg" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.535084 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-config\") pod 
\"dnsmasq-dns-fcfdd6f9f-cnc8z\" (UID: \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-cnc8z" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.535203 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09cbc5ac-7259-494d-8c1c-5d25eac1161c-scripts\") pod \"horizon-6bfc57fb6f-w2dxg\" (UID: \"09cbc5ac-7259-494d-8c1c-5d25eac1161c\") " pod="openstack/horizon-6bfc57fb6f-w2dxg" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.535293 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-cnc8z\" (UID: \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-cnc8z" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.535373 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecec3227-52bd-4b05-83ac-90218117a222-logs\") pod \"placement-db-sync-sb6hr\" (UID: \"ecec3227-52bd-4b05-83ac-90218117a222\") " pod="openstack/placement-db-sync-sb6hr" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.535450 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-cnc8z\" (UID: \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-cnc8z" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.535528 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09cbc5ac-7259-494d-8c1c-5d25eac1161c-logs\") pod \"horizon-6bfc57fb6f-w2dxg\" (UID: \"09cbc5ac-7259-494d-8c1c-5d25eac1161c\") " 
pod="openstack/horizon-6bfc57fb6f-w2dxg" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.535640 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82zp9\" (UniqueName: \"kubernetes.io/projected/09cbc5ac-7259-494d-8c1c-5d25eac1161c-kube-api-access-82zp9\") pod \"horizon-6bfc57fb6f-w2dxg\" (UID: \"09cbc5ac-7259-494d-8c1c-5d25eac1161c\") " pod="openstack/horizon-6bfc57fb6f-w2dxg" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.535772 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msqrn\" (UniqueName: \"kubernetes.io/projected/909f05de-3353-479b-b281-41bdb6d455fd-kube-api-access-msqrn\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.535839 4637 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/909f05de-3353-479b-b281-41bdb6d455fd-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.536726 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09cbc5ac-7259-494d-8c1c-5d25eac1161c-logs\") pod \"horizon-6bfc57fb6f-w2dxg\" (UID: \"09cbc5ac-7259-494d-8c1c-5d25eac1161c\") " pod="openstack/horizon-6bfc57fb6f-w2dxg" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.541810 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09cbc5ac-7259-494d-8c1c-5d25eac1161c-scripts\") pod \"horizon-6bfc57fb6f-w2dxg\" (UID: \"09cbc5ac-7259-494d-8c1c-5d25eac1161c\") " pod="openstack/horizon-6bfc57fb6f-w2dxg" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.542373 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-sb6hr"] Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.543243 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/09cbc5ac-7259-494d-8c1c-5d25eac1161c-config-data\") pod \"horizon-6bfc57fb6f-w2dxg\" (UID: \"09cbc5ac-7259-494d-8c1c-5d25eac1161c\") " pod="openstack/horizon-6bfc57fb6f-w2dxg" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.564228 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/09cbc5ac-7259-494d-8c1c-5d25eac1161c-horizon-secret-key\") pod \"horizon-6bfc57fb6f-w2dxg\" (UID: \"09cbc5ac-7259-494d-8c1c-5d25eac1161c\") " pod="openstack/horizon-6bfc57fb6f-w2dxg" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.608243 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-cnc8z"] Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.622077 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82zp9\" (UniqueName: \"kubernetes.io/projected/09cbc5ac-7259-494d-8c1c-5d25eac1161c-kube-api-access-82zp9\") pod \"horizon-6bfc57fb6f-w2dxg\" (UID: \"09cbc5ac-7259-494d-8c1c-5d25eac1161c\") " pod="openstack/horizon-6bfc57fb6f-w2dxg" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.629094 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/909f05de-3353-479b-b281-41bdb6d455fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "909f05de-3353-479b-b281-41bdb6d455fd" (UID: "909f05de-3353-479b-b281-41bdb6d455fd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.636986 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecec3227-52bd-4b05-83ac-90218117a222-scripts\") pod \"placement-db-sync-sb6hr\" (UID: \"ecec3227-52bd-4b05-83ac-90218117a222\") " pod="openstack/placement-db-sync-sb6hr" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.637038 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-config\") pod \"dnsmasq-dns-fcfdd6f9f-cnc8z\" (UID: \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-cnc8z" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.637083 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-cnc8z\" (UID: \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-cnc8z" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.637114 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecec3227-52bd-4b05-83ac-90218117a222-logs\") pod \"placement-db-sync-sb6hr\" (UID: \"ecec3227-52bd-4b05-83ac-90218117a222\") " pod="openstack/placement-db-sync-sb6hr" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.637137 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-cnc8z\" (UID: \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-cnc8z" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.637206 4637 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-llgsw\" (UniqueName: \"kubernetes.io/projected/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-kube-api-access-llgsw\") pod \"dnsmasq-dns-fcfdd6f9f-cnc8z\" (UID: \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-cnc8z" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.637236 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xs9s\" (UniqueName: \"kubernetes.io/projected/ecec3227-52bd-4b05-83ac-90218117a222-kube-api-access-7xs9s\") pod \"placement-db-sync-sb6hr\" (UID: \"ecec3227-52bd-4b05-83ac-90218117a222\") " pod="openstack/placement-db-sync-sb6hr" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.637252 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-cnc8z\" (UID: \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-cnc8z" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.637269 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-cnc8z\" (UID: \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-cnc8z" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.637303 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecec3227-52bd-4b05-83ac-90218117a222-combined-ca-bundle\") pod \"placement-db-sync-sb6hr\" (UID: \"ecec3227-52bd-4b05-83ac-90218117a222\") " pod="openstack/placement-db-sync-sb6hr" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.637321 4637 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecec3227-52bd-4b05-83ac-90218117a222-config-data\") pod \"placement-db-sync-sb6hr\" (UID: \"ecec3227-52bd-4b05-83ac-90218117a222\") " pod="openstack/placement-db-sync-sb6hr" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.637370 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/909f05de-3353-479b-b281-41bdb6d455fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.639021 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-cnc8z\" (UID: \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-cnc8z" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.640091 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-cnc8z\" (UID: \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-cnc8z" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.640859 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-cnc8z\" (UID: \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-cnc8z" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.642018 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecec3227-52bd-4b05-83ac-90218117a222-config-data\") pod \"placement-db-sync-sb6hr\" (UID: \"ecec3227-52bd-4b05-83ac-90218117a222\") " 
pod="openstack/placement-db-sync-sb6hr" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.646568 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-config\") pod \"dnsmasq-dns-fcfdd6f9f-cnc8z\" (UID: \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-cnc8z" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.646689 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecec3227-52bd-4b05-83ac-90218117a222-logs\") pod \"placement-db-sync-sb6hr\" (UID: \"ecec3227-52bd-4b05-83ac-90218117a222\") " pod="openstack/placement-db-sync-sb6hr" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.643923 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-cnc8z\" (UID: \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-cnc8z" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.650844 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecec3227-52bd-4b05-83ac-90218117a222-scripts\") pod \"placement-db-sync-sb6hr\" (UID: \"ecec3227-52bd-4b05-83ac-90218117a222\") " pod="openstack/placement-db-sync-sb6hr" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.653915 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecec3227-52bd-4b05-83ac-90218117a222-combined-ca-bundle\") pod \"placement-db-sync-sb6hr\" (UID: \"ecec3227-52bd-4b05-83ac-90218117a222\") " pod="openstack/placement-db-sync-sb6hr" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.664850 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-llgsw\" (UniqueName: \"kubernetes.io/projected/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-kube-api-access-llgsw\") pod \"dnsmasq-dns-fcfdd6f9f-cnc8z\" (UID: \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-cnc8z" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.680524 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xs9s\" (UniqueName: \"kubernetes.io/projected/ecec3227-52bd-4b05-83ac-90218117a222-kube-api-access-7xs9s\") pod \"placement-db-sync-sb6hr\" (UID: \"ecec3227-52bd-4b05-83ac-90218117a222\") " pod="openstack/placement-db-sync-sb6hr" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.697807 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6bfc57fb6f-w2dxg" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.698913 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6f65455bb9-dgzkv"] Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.701557 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f65455bb9-dgzkv" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.708218 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bhz5d" event={"ID":"909f05de-3353-479b-b281-41bdb6d455fd","Type":"ContainerDied","Data":"6a239a58597abdb3e20c2ec48a59485a32ab9271659146300bef7d4b824d7c04"} Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.708269 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a239a58597abdb3e20c2ec48a59485a32ab9271659146300bef7d4b824d7c04" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.708357 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bhz5d" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.724207 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/909f05de-3353-479b-b281-41bdb6d455fd-config-data" (OuterVolumeSpecName: "config-data") pod "909f05de-3353-479b-b281-41bdb6d455fd" (UID: "909f05de-3353-479b-b281-41bdb6d455fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.727422 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-a37f-account-create-cjk64"] Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.728968 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a37f-account-create-cjk64" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.742016 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn555\" (UniqueName: \"kubernetes.io/projected/61c23c76-9613-401c-aff5-5ad572188e85-kube-api-access-pn555\") pod \"cinder-a37f-account-create-cjk64\" (UID: \"61c23c76-9613-401c-aff5-5ad572188e85\") " pod="openstack/cinder-a37f-account-create-cjk64" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.742076 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9689152e-e8b0-40ab-a2e6-d0441160c13a-horizon-secret-key\") pod \"horizon-6f65455bb9-dgzkv\" (UID: \"9689152e-e8b0-40ab-a2e6-d0441160c13a\") " pod="openstack/horizon-6f65455bb9-dgzkv" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.742149 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9689152e-e8b0-40ab-a2e6-d0441160c13a-config-data\") pod \"horizon-6f65455bb9-dgzkv\" (UID: 
\"9689152e-e8b0-40ab-a2e6-d0441160c13a\") " pod="openstack/horizon-6f65455bb9-dgzkv" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.742173 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9689152e-e8b0-40ab-a2e6-d0441160c13a-logs\") pod \"horizon-6f65455bb9-dgzkv\" (UID: \"9689152e-e8b0-40ab-a2e6-d0441160c13a\") " pod="openstack/horizon-6f65455bb9-dgzkv" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.742197 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9689152e-e8b0-40ab-a2e6-d0441160c13a-scripts\") pod \"horizon-6f65455bb9-dgzkv\" (UID: \"9689152e-e8b0-40ab-a2e6-d0441160c13a\") " pod="openstack/horizon-6f65455bb9-dgzkv" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.742214 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25sxn\" (UniqueName: \"kubernetes.io/projected/9689152e-e8b0-40ab-a2e6-d0441160c13a-kube-api-access-25sxn\") pod \"horizon-6f65455bb9-dgzkv\" (UID: \"9689152e-e8b0-40ab-a2e6-d0441160c13a\") " pod="openstack/horizon-6f65455bb9-dgzkv" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.742264 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/909f05de-3353-479b-b281-41bdb6d455fd-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.745944 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.770104 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f65455bb9-dgzkv"] Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.825014 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-a37f-account-create-cjk64"] Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.837368 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-sb6hr" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.848136 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.850248 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.850762 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9689152e-e8b0-40ab-a2e6-d0441160c13a-config-data\") pod \"horizon-6f65455bb9-dgzkv\" (UID: \"9689152e-e8b0-40ab-a2e6-d0441160c13a\") " pod="openstack/horizon-6f65455bb9-dgzkv" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.850811 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9689152e-e8b0-40ab-a2e6-d0441160c13a-logs\") pod \"horizon-6f65455bb9-dgzkv\" (UID: \"9689152e-e8b0-40ab-a2e6-d0441160c13a\") " pod="openstack/horizon-6f65455bb9-dgzkv" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.850834 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9689152e-e8b0-40ab-a2e6-d0441160c13a-scripts\") pod \"horizon-6f65455bb9-dgzkv\" (UID: \"9689152e-e8b0-40ab-a2e6-d0441160c13a\") " pod="openstack/horizon-6f65455bb9-dgzkv" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.850852 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25sxn\" (UniqueName: \"kubernetes.io/projected/9689152e-e8b0-40ab-a2e6-d0441160c13a-kube-api-access-25sxn\") pod \"horizon-6f65455bb9-dgzkv\" (UID: \"9689152e-e8b0-40ab-a2e6-d0441160c13a\") 
" pod="openstack/horizon-6f65455bb9-dgzkv" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.851014 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn555\" (UniqueName: \"kubernetes.io/projected/61c23c76-9613-401c-aff5-5ad572188e85-kube-api-access-pn555\") pod \"cinder-a37f-account-create-cjk64\" (UID: \"61c23c76-9613-401c-aff5-5ad572188e85\") " pod="openstack/cinder-a37f-account-create-cjk64" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.851039 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9689152e-e8b0-40ab-a2e6-d0441160c13a-horizon-secret-key\") pod \"horizon-6f65455bb9-dgzkv\" (UID: \"9689152e-e8b0-40ab-a2e6-d0441160c13a\") " pod="openstack/horizon-6f65455bb9-dgzkv" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.854954 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9689152e-e8b0-40ab-a2e6-d0441160c13a-config-data\") pod \"horizon-6f65455bb9-dgzkv\" (UID: \"9689152e-e8b0-40ab-a2e6-d0441160c13a\") " pod="openstack/horizon-6f65455bb9-dgzkv" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.855275 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9689152e-e8b0-40ab-a2e6-d0441160c13a-logs\") pod \"horizon-6f65455bb9-dgzkv\" (UID: \"9689152e-e8b0-40ab-a2e6-d0441160c13a\") " pod="openstack/horizon-6f65455bb9-dgzkv" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.855863 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9689152e-e8b0-40ab-a2e6-d0441160c13a-scripts\") pod \"horizon-6f65455bb9-dgzkv\" (UID: \"9689152e-e8b0-40ab-a2e6-d0441160c13a\") " pod="openstack/horizon-6f65455bb9-dgzkv" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.860660 4637 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-cnc8z" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.869178 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9689152e-e8b0-40ab-a2e6-d0441160c13a-horizon-secret-key\") pod \"horizon-6f65455bb9-dgzkv\" (UID: \"9689152e-e8b0-40ab-a2e6-d0441160c13a\") " pod="openstack/horizon-6f65455bb9-dgzkv" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.892912 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.893196 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.907388 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25sxn\" (UniqueName: \"kubernetes.io/projected/9689152e-e8b0-40ab-a2e6-d0441160c13a-kube-api-access-25sxn\") pod \"horizon-6f65455bb9-dgzkv\" (UID: \"9689152e-e8b0-40ab-a2e6-d0441160c13a\") " pod="openstack/horizon-6f65455bb9-dgzkv" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.915234 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.954363 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/604935ee-aa8a-461e-9bd9-f11ad29128e0-config-data\") pod \"ceilometer-0\" (UID: \"604935ee-aa8a-461e-9bd9-f11ad29128e0\") " pod="openstack/ceilometer-0" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.954727 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/604935ee-aa8a-461e-9bd9-f11ad29128e0-scripts\") pod \"ceilometer-0\" (UID: 
\"604935ee-aa8a-461e-9bd9-f11ad29128e0\") " pod="openstack/ceilometer-0" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.954761 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nsfv\" (UniqueName: \"kubernetes.io/projected/604935ee-aa8a-461e-9bd9-f11ad29128e0-kube-api-access-9nsfv\") pod \"ceilometer-0\" (UID: \"604935ee-aa8a-461e-9bd9-f11ad29128e0\") " pod="openstack/ceilometer-0" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.954789 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604935ee-aa8a-461e-9bd9-f11ad29128e0-log-httpd\") pod \"ceilometer-0\" (UID: \"604935ee-aa8a-461e-9bd9-f11ad29128e0\") " pod="openstack/ceilometer-0" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.954808 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604935ee-aa8a-461e-9bd9-f11ad29128e0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"604935ee-aa8a-461e-9bd9-f11ad29128e0\") " pod="openstack/ceilometer-0" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.954863 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604935ee-aa8a-461e-9bd9-f11ad29128e0-run-httpd\") pod \"ceilometer-0\" (UID: \"604935ee-aa8a-461e-9bd9-f11ad29128e0\") " pod="openstack/ceilometer-0" Dec 01 15:03:32 crc kubenswrapper[4637]: I1201 15:03:32.954887 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/604935ee-aa8a-461e-9bd9-f11ad29128e0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"604935ee-aa8a-461e-9bd9-f11ad29128e0\") " pod="openstack/ceilometer-0" Dec 01 15:03:33 crc kubenswrapper[4637]: 
I1201 15:03:33.012100 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn555\" (UniqueName: \"kubernetes.io/projected/61c23c76-9613-401c-aff5-5ad572188e85-kube-api-access-pn555\") pod \"cinder-a37f-account-create-cjk64\" (UID: \"61c23c76-9613-401c-aff5-5ad572188e85\") " pod="openstack/cinder-a37f-account-create-cjk64" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.019001 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-f547-account-create-brb84"] Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.020805 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f547-account-create-brb84" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.030778 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.040609 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-f547-account-create-brb84"] Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.057454 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f65455bb9-dgzkv" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.058877 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/604935ee-aa8a-461e-9bd9-f11ad29128e0-scripts\") pod \"ceilometer-0\" (UID: \"604935ee-aa8a-461e-9bd9-f11ad29128e0\") " pod="openstack/ceilometer-0" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.058906 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nsfv\" (UniqueName: \"kubernetes.io/projected/604935ee-aa8a-461e-9bd9-f11ad29128e0-kube-api-access-9nsfv\") pod \"ceilometer-0\" (UID: \"604935ee-aa8a-461e-9bd9-f11ad29128e0\") " pod="openstack/ceilometer-0" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.058948 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604935ee-aa8a-461e-9bd9-f11ad29128e0-log-httpd\") pod \"ceilometer-0\" (UID: \"604935ee-aa8a-461e-9bd9-f11ad29128e0\") " pod="openstack/ceilometer-0" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.058965 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604935ee-aa8a-461e-9bd9-f11ad29128e0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"604935ee-aa8a-461e-9bd9-f11ad29128e0\") " pod="openstack/ceilometer-0" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.059016 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604935ee-aa8a-461e-9bd9-f11ad29128e0-run-httpd\") pod \"ceilometer-0\" (UID: \"604935ee-aa8a-461e-9bd9-f11ad29128e0\") " pod="openstack/ceilometer-0" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.059030 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/604935ee-aa8a-461e-9bd9-f11ad29128e0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"604935ee-aa8a-461e-9bd9-f11ad29128e0\") " pod="openstack/ceilometer-0" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.059072 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzxvv\" (UniqueName: \"kubernetes.io/projected/9c657096-ed3a-4b49-a646-4ebad0261998-kube-api-access-lzxvv\") pod \"barbican-f547-account-create-brb84\" (UID: \"9c657096-ed3a-4b49-a646-4ebad0261998\") " pod="openstack/barbican-f547-account-create-brb84" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.059113 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/604935ee-aa8a-461e-9bd9-f11ad29128e0-config-data\") pod \"ceilometer-0\" (UID: \"604935ee-aa8a-461e-9bd9-f11ad29128e0\") " pod="openstack/ceilometer-0" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.065774 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/604935ee-aa8a-461e-9bd9-f11ad29128e0-scripts\") pod \"ceilometer-0\" (UID: \"604935ee-aa8a-461e-9bd9-f11ad29128e0\") " pod="openstack/ceilometer-0" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.066071 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604935ee-aa8a-461e-9bd9-f11ad29128e0-log-httpd\") pod \"ceilometer-0\" (UID: \"604935ee-aa8a-461e-9bd9-f11ad29128e0\") " pod="openstack/ceilometer-0" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.070502 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604935ee-aa8a-461e-9bd9-f11ad29128e0-run-httpd\") pod \"ceilometer-0\" (UID: \"604935ee-aa8a-461e-9bd9-f11ad29128e0\") " pod="openstack/ceilometer-0" Dec 01 
15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.074604 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604935ee-aa8a-461e-9bd9-f11ad29128e0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"604935ee-aa8a-461e-9bd9-f11ad29128e0\") " pod="openstack/ceilometer-0" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.084524 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/604935ee-aa8a-461e-9bd9-f11ad29128e0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"604935ee-aa8a-461e-9bd9-f11ad29128e0\") " pod="openstack/ceilometer-0" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.086066 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-cd96-account-create-kx994"] Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.087573 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cd96-account-create-kx994" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.095853 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/604935ee-aa8a-461e-9bd9-f11ad29128e0-config-data\") pod \"ceilometer-0\" (UID: \"604935ee-aa8a-461e-9bd9-f11ad29128e0\") " pod="openstack/ceilometer-0" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.107476 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-a37f-account-create-cjk64" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.112795 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cd96-account-create-kx994"] Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.124070 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.125032 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nsfv\" (UniqueName: \"kubernetes.io/projected/604935ee-aa8a-461e-9bd9-f11ad29128e0-kube-api-access-9nsfv\") pod \"ceilometer-0\" (UID: \"604935ee-aa8a-461e-9bd9-f11ad29128e0\") " pod="openstack/ceilometer-0" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.164504 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzxvv\" (UniqueName: \"kubernetes.io/projected/9c657096-ed3a-4b49-a646-4ebad0261998-kube-api-access-lzxvv\") pod \"barbican-f547-account-create-brb84\" (UID: \"9c657096-ed3a-4b49-a646-4ebad0261998\") " pod="openstack/barbican-f547-account-create-brb84" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.164785 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zcpg\" (UniqueName: \"kubernetes.io/projected/a2866484-72b3-4826-b45d-f015df568ee1-kube-api-access-9zcpg\") pod \"neutron-cd96-account-create-kx994\" (UID: \"a2866484-72b3-4826-b45d-f015df568ee1\") " pod="openstack/neutron-cd96-account-create-kx994" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.199567 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzxvv\" (UniqueName: \"kubernetes.io/projected/9c657096-ed3a-4b49-a646-4ebad0261998-kube-api-access-lzxvv\") pod \"barbican-f547-account-create-brb84\" (UID: \"9c657096-ed3a-4b49-a646-4ebad0261998\") " 
pod="openstack/barbican-f547-account-create-brb84" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.223476 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.266894 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zcpg\" (UniqueName: \"kubernetes.io/projected/a2866484-72b3-4826-b45d-f015df568ee1-kube-api-access-9zcpg\") pod \"neutron-cd96-account-create-kx994\" (UID: \"a2866484-72b3-4826-b45d-f015df568ee1\") " pod="openstack/neutron-cd96-account-create-kx994" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.338722 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zcpg\" (UniqueName: \"kubernetes.io/projected/a2866484-72b3-4826-b45d-f015df568ee1-kube-api-access-9zcpg\") pod \"neutron-cd96-account-create-kx994\" (UID: \"a2866484-72b3-4826-b45d-f015df568ee1\") " pod="openstack/neutron-cd96-account-create-kx994" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.339833 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-cnc8z"] Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.472315 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f547-account-create-brb84" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.510867 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-tzb4m"] Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.540011 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.541951 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cd96-account-create-kx994" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.570794 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-tzb4m"] Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.748010 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-tzb4m\" (UID: \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\") " pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.748057 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ggdv\" (UniqueName: \"kubernetes.io/projected/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-kube-api-access-4ggdv\") pod \"dnsmasq-dns-57c957c4ff-tzb4m\" (UID: \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\") " pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.748078 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-config\") pod \"dnsmasq-dns-57c957c4ff-tzb4m\" (UID: \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\") " pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.748108 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-tzb4m\" (UID: \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\") " pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.748155 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-tzb4m\" (UID: \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\") " pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.748220 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-tzb4m\" (UID: \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\") " pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.851822 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-tzb4m\" (UID: \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\") " pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.852204 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-tzb4m\" (UID: \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\") " pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.852265 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-tzb4m\" (UID: \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\") " pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.852282 4637 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-4ggdv\" (UniqueName: \"kubernetes.io/projected/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-kube-api-access-4ggdv\") pod \"dnsmasq-dns-57c957c4ff-tzb4m\" (UID: \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\") " pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.852300 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-config\") pod \"dnsmasq-dns-57c957c4ff-tzb4m\" (UID: \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\") " pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.852323 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-tzb4m\" (UID: \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\") " pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.853202 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-tzb4m\" (UID: \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\") " pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.853692 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-config\") pod \"dnsmasq-dns-57c957c4ff-tzb4m\" (UID: \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\") " pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.854046 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-tzb4m\" (UID: \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\") " pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.855306 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-tzb4m\" (UID: \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\") " pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.856542 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-tzb4m\" (UID: \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\") " pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.879265 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ggdv\" (UniqueName: \"kubernetes.io/projected/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-kube-api-access-4ggdv\") pod \"dnsmasq-dns-57c957c4ff-tzb4m\" (UID: \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\") " pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.893115 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-wtsst"] Dec 01 15:03:33 crc kubenswrapper[4637]: W1201 15:03:33.902288 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6def64d4_9e4d_4e24_9815_ea2de38309ff.slice/crio-54778c4412519965620858db338ee5adbf9d13f215739a25186f3585570b1741 WatchSource:0}: Error finding container 54778c4412519965620858db338ee5adbf9d13f215739a25186f3585570b1741: Status 404 returned error can't find the container with id 
54778c4412519965620858db338ee5adbf9d13f215739a25186f3585570b1741 Dec 01 15:03:33 crc kubenswrapper[4637]: I1201 15:03:33.925355 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.110890 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-j85h5"] Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.155511 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6bfc57fb6f-w2dxg"] Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.254589 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-cnc8z"] Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.298190 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a37f-account-create-cjk64"] Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.424046 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f65455bb9-dgzkv"] Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.433951 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-sb6hr"] Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.530364 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.532539 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.538853 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-w8bvz" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.539164 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.545478 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.557524 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.644999 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.670786 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk28c\" (UniqueName: \"kubernetes.io/projected/c9e23a67-fbfc-4f32-ab91-404522460d90-kube-api-access-dk28c\") pod \"glance-default-external-api-0\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.670848 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9e23a67-fbfc-4f32-ab91-404522460d90-config-data\") pod \"glance-default-external-api-0\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.670912 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c9e23a67-fbfc-4f32-ab91-404522460d90-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.671066 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.671087 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9e23a67-fbfc-4f32-ab91-404522460d90-logs\") pod \"glance-default-external-api-0\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.671100 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9e23a67-fbfc-4f32-ab91-404522460d90-scripts\") pod \"glance-default-external-api-0\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.671156 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9e23a67-fbfc-4f32-ab91-404522460d90-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.678780 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cd96-account-create-kx994"] Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.738252 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-f547-account-create-brb84"] Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.774069 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9e23a67-fbfc-4f32-ab91-404522460d90-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.774493 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk28c\" (UniqueName: \"kubernetes.io/projected/c9e23a67-fbfc-4f32-ab91-404522460d90-kube-api-access-dk28c\") pod \"glance-default-external-api-0\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.774525 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9e23a67-fbfc-4f32-ab91-404522460d90-config-data\") pod \"glance-default-external-api-0\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.774561 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c9e23a67-fbfc-4f32-ab91-404522460d90-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.774614 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") " pod="openstack/glance-default-external-api-0" Dec 01 
15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.774632 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9e23a67-fbfc-4f32-ab91-404522460d90-scripts\") pod \"glance-default-external-api-0\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.774647 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9e23a67-fbfc-4f32-ab91-404522460d90-logs\") pod \"glance-default-external-api-0\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.775709 4637 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.794598 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-sb6hr" event={"ID":"ecec3227-52bd-4b05-83ac-90218117a222","Type":"ContainerStarted","Data":"635f9aa7fa26165fcb77db2b164d2e0b58ab2bdd41305493af81cad4ebdc6537"} Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.814082 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6bfc57fb6f-w2dxg" event={"ID":"09cbc5ac-7259-494d-8c1c-5d25eac1161c","Type":"ContainerStarted","Data":"8caf538f4c83e3ae19dc58ffaa4ac7dada1bb54e5d7d8ea8cac7a60103c98b6c"} Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.815216 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9e23a67-fbfc-4f32-ab91-404522460d90-logs\") pod 
\"glance-default-external-api-0\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.815245 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c9e23a67-fbfc-4f32-ab91-404522460d90-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.817838 4637 generic.go:334] "Generic (PLEG): container finished" podID="6def64d4-9e4d-4e24-9815-ea2de38309ff" containerID="6885b7965aaba5759b5594ad0a22f3cf98a983fd7a96d54ecc2cdf9311763671" exitCode=0 Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.817895 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-wtsst" event={"ID":"6def64d4-9e4d-4e24-9815-ea2de38309ff","Type":"ContainerDied","Data":"6885b7965aaba5759b5594ad0a22f3cf98a983fd7a96d54ecc2cdf9311763671"} Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.817915 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-wtsst" event={"ID":"6def64d4-9e4d-4e24-9815-ea2de38309ff","Type":"ContainerStarted","Data":"54778c4412519965620858db338ee5adbf9d13f215739a25186f3585570b1741"} Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.819744 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cd96-account-create-kx994" event={"ID":"a2866484-72b3-4826-b45d-f015df568ee1","Type":"ContainerStarted","Data":"0dbe0a6662fba2a0b2f0f41491bacbeba46ab93e62e0e6f170aa2c629ceaf0e7"} Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.821086 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a37f-account-create-cjk64" 
event={"ID":"61c23c76-9613-401c-aff5-5ad572188e85","Type":"ContainerStarted","Data":"30b7bb7e9cf3a9282f5b75e2064539cc3e396330a0d917c63335e38bf91663f9"} Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.821108 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a37f-account-create-cjk64" event={"ID":"61c23c76-9613-401c-aff5-5ad572188e85","Type":"ContainerStarted","Data":"0360c6916e89cff0c4b1ff2beddaca7c2158a4f732f278ba692f4bb7cf892cfa"} Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.827834 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f65455bb9-dgzkv" event={"ID":"9689152e-e8b0-40ab-a2e6-d0441160c13a","Type":"ContainerStarted","Data":"7a9fd9c915fc4599000aeef05e64606ad675b7de0229b4c8b7f3df5ca71cc32a"} Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.833097 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j85h5" event={"ID":"a9d588ec-741e-4df6-94e1-a824c312d598","Type":"ContainerStarted","Data":"15dc05e4253bfaee5d3e148c008f60d4dfa934c274ce587be883c00844f3a7d8"} Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.833155 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j85h5" event={"ID":"a9d588ec-741e-4df6-94e1-a824c312d598","Type":"ContainerStarted","Data":"2eb056e786119521940ef60ae03838c596a50ae427e35f2cb7241ec214116d38"} Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.835623 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"604935ee-aa8a-461e-9bd9-f11ad29128e0","Type":"ContainerStarted","Data":"6e3c037800958b3e108ef40cf3e8cacf27545836826a57def6bb450382341103"} Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.836734 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-cnc8z" 
event={"ID":"9e3ec383-3c69-411d-a2d6-2c3ba57f5259","Type":"ContainerStarted","Data":"1a6e38013136c4abbeb45c783be983df824cd7600a551af5950d46a24175fdae"} Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.865689 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9e23a67-fbfc-4f32-ab91-404522460d90-scripts\") pod \"glance-default-external-api-0\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.875049 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk28c\" (UniqueName: \"kubernetes.io/projected/c9e23a67-fbfc-4f32-ab91-404522460d90-kube-api-access-dk28c\") pod \"glance-default-external-api-0\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.877271 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9e23a67-fbfc-4f32-ab91-404522460d90-config-data\") pod \"glance-default-external-api-0\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.885485 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-j85h5" podStartSLOduration=3.885465915 podStartE2EDuration="3.885465915s" podCreationTimestamp="2025-12-01 15:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:03:34.881284642 +0000 UTC m=+1065.398993470" watchObservedRunningTime="2025-12-01 15:03:34.885465915 +0000 UTC m=+1065.403174743" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.893718 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9e23a67-fbfc-4f32-ab91-404522460d90-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.897452 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.920492 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-a37f-account-create-cjk64" podStartSLOduration=2.920468674 podStartE2EDuration="2.920468674s" podCreationTimestamp="2025-12-01 15:03:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:03:34.902730613 +0000 UTC m=+1065.420439441" watchObservedRunningTime="2025-12-01 15:03:34.920468674 +0000 UTC m=+1065.438177502" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.954174 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-tzb4m"] Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.980072 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.983472 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.989764 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 01 15:03:34 crc kubenswrapper[4637]: I1201 15:03:34.999658 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 15:03:35 crc kubenswrapper[4637]: I1201 15:03:35.185843 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fed8c1-7b36-4b01-8edb-35fe65411790-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:03:35 crc kubenswrapper[4637]: I1201 15:03:35.186324 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91fed8c1-7b36-4b01-8edb-35fe65411790-config-data\") pod \"glance-default-internal-api-0\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:03:35 crc kubenswrapper[4637]: I1201 15:03:35.186367 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:03:35 crc kubenswrapper[4637]: I1201 15:03:35.186408 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91fed8c1-7b36-4b01-8edb-35fe65411790-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") " 
pod="openstack/glance-default-internal-api-0" Dec 01 15:03:35 crc kubenswrapper[4637]: I1201 15:03:35.186487 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj49l\" (UniqueName: \"kubernetes.io/projected/91fed8c1-7b36-4b01-8edb-35fe65411790-kube-api-access-bj49l\") pod \"glance-default-internal-api-0\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:03:35 crc kubenswrapper[4637]: I1201 15:03:35.186519 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91fed8c1-7b36-4b01-8edb-35fe65411790-scripts\") pod \"glance-default-internal-api-0\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:03:35 crc kubenswrapper[4637]: I1201 15:03:35.186543 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91fed8c1-7b36-4b01-8edb-35fe65411790-logs\") pod \"glance-default-internal-api-0\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:03:35 crc kubenswrapper[4637]: I1201 15:03:35.198135 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 15:03:35 crc kubenswrapper[4637]: I1201 15:03:35.290183 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj49l\" (UniqueName: \"kubernetes.io/projected/91fed8c1-7b36-4b01-8edb-35fe65411790-kube-api-access-bj49l\") pod \"glance-default-internal-api-0\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:03:35 crc kubenswrapper[4637]: I1201 15:03:35.290237 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91fed8c1-7b36-4b01-8edb-35fe65411790-scripts\") pod \"glance-default-internal-api-0\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:03:35 crc kubenswrapper[4637]: I1201 15:03:35.290267 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91fed8c1-7b36-4b01-8edb-35fe65411790-logs\") pod \"glance-default-internal-api-0\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:03:35 crc kubenswrapper[4637]: I1201 15:03:35.290313 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fed8c1-7b36-4b01-8edb-35fe65411790-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:03:35 crc kubenswrapper[4637]: I1201 15:03:35.290336 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91fed8c1-7b36-4b01-8edb-35fe65411790-config-data\") pod \"glance-default-internal-api-0\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") " pod="openstack/glance-default-internal-api-0" 
Dec 01 15:03:35 crc kubenswrapper[4637]: I1201 15:03:35.290364 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:03:35 crc kubenswrapper[4637]: I1201 15:03:35.290394 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91fed8c1-7b36-4b01-8edb-35fe65411790-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:03:35 crc kubenswrapper[4637]: I1201 15:03:35.291190 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91fed8c1-7b36-4b01-8edb-35fe65411790-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:03:35 crc kubenswrapper[4637]: I1201 15:03:35.295899 4637 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Dec 01 15:03:35 crc kubenswrapper[4637]: I1201 15:03:35.297583 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91fed8c1-7b36-4b01-8edb-35fe65411790-logs\") pod \"glance-default-internal-api-0\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:03:35 crc kubenswrapper[4637]: I1201 15:03:35.298860 4637 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91fed8c1-7b36-4b01-8edb-35fe65411790-scripts\") pod \"glance-default-internal-api-0\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:03:35 crc kubenswrapper[4637]: I1201 15:03:35.309361 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fed8c1-7b36-4b01-8edb-35fe65411790-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:03:35 crc kubenswrapper[4637]: I1201 15:03:35.319179 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj49l\" (UniqueName: \"kubernetes.io/projected/91fed8c1-7b36-4b01-8edb-35fe65411790-kube-api-access-bj49l\") pod \"glance-default-internal-api-0\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:03:35 crc kubenswrapper[4637]: I1201 15:03:35.322994 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91fed8c1-7b36-4b01-8edb-35fe65411790-config-data\") pod \"glance-default-internal-api-0\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:03:35 crc kubenswrapper[4637]: I1201 15:03:35.362991 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-wtsst" Dec 01 15:03:35 crc kubenswrapper[4637]: I1201 15:03:35.430038 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:35.499852 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-ovsdbserver-nb\") pod \"6def64d4-9e4d-4e24-9815-ea2de38309ff\" (UID: \"6def64d4-9e4d-4e24-9815-ea2de38309ff\") " Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:35.499956 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfqrn\" (UniqueName: \"kubernetes.io/projected/6def64d4-9e4d-4e24-9815-ea2de38309ff-kube-api-access-xfqrn\") pod \"6def64d4-9e4d-4e24-9815-ea2de38309ff\" (UID: \"6def64d4-9e4d-4e24-9815-ea2de38309ff\") " Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:35.499983 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-dns-svc\") pod \"6def64d4-9e4d-4e24-9815-ea2de38309ff\" (UID: \"6def64d4-9e4d-4e24-9815-ea2de38309ff\") " Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:35.500016 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-dns-swift-storage-0\") pod \"6def64d4-9e4d-4e24-9815-ea2de38309ff\" (UID: \"6def64d4-9e4d-4e24-9815-ea2de38309ff\") " Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:35.500045 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-config\") pod \"6def64d4-9e4d-4e24-9815-ea2de38309ff\" (UID: \"6def64d4-9e4d-4e24-9815-ea2de38309ff\") " Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:35.500065 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-ovsdbserver-sb\") pod \"6def64d4-9e4d-4e24-9815-ea2de38309ff\" (UID: \"6def64d4-9e4d-4e24-9815-ea2de38309ff\") " Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:35.535691 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6def64d4-9e4d-4e24-9815-ea2de38309ff-kube-api-access-xfqrn" (OuterVolumeSpecName: "kube-api-access-xfqrn") pod "6def64d4-9e4d-4e24-9815-ea2de38309ff" (UID: "6def64d4-9e4d-4e24-9815-ea2de38309ff"). InnerVolumeSpecName "kube-api-access-xfqrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:35.555554 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6def64d4-9e4d-4e24-9815-ea2de38309ff" (UID: "6def64d4-9e4d-4e24-9815-ea2de38309ff"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:35.560715 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6def64d4-9e4d-4e24-9815-ea2de38309ff" (UID: "6def64d4-9e4d-4e24-9815-ea2de38309ff"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:35.561237 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6def64d4-9e4d-4e24-9815-ea2de38309ff" (UID: "6def64d4-9e4d-4e24-9815-ea2de38309ff"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:35.561412 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6def64d4-9e4d-4e24-9815-ea2de38309ff" (UID: "6def64d4-9e4d-4e24-9815-ea2de38309ff"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:35.579297 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:35.581149 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-config" (OuterVolumeSpecName: "config") pod "6def64d4-9e4d-4e24-9815-ea2de38309ff" (UID: "6def64d4-9e4d-4e24-9815-ea2de38309ff"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:35.602733 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfqrn\" (UniqueName: \"kubernetes.io/projected/6def64d4-9e4d-4e24-9815-ea2de38309ff-kube-api-access-xfqrn\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:35.602768 4637 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:35.602795 4637 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:35.602807 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-config\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:35.602819 4637 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:35.602831 4637 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6def64d4-9e4d-4e24-9815-ea2de38309ff-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:35.889974 4637 generic.go:334] "Generic (PLEG): container finished" podID="9e3ec383-3c69-411d-a2d6-2c3ba57f5259" containerID="d73a814f59a3f673228e5658d8591f9f28b25f10caeeae639bf04a861ef2091d" exitCode=0 Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:35.890527 4637 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-cnc8z" event={"ID":"9e3ec383-3c69-411d-a2d6-2c3ba57f5259","Type":"ContainerDied","Data":"d73a814f59a3f673228e5658d8591f9f28b25f10caeeae639bf04a861ef2091d"} Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:35.941875 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:35.951856 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-wtsst" event={"ID":"6def64d4-9e4d-4e24-9815-ea2de38309ff","Type":"ContainerDied","Data":"54778c4412519965620858db338ee5adbf9d13f215739a25186f3585570b1741"} Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:35.951924 4637 scope.go:117] "RemoveContainer" containerID="6885b7965aaba5759b5594ad0a22f3cf98a983fd7a96d54ecc2cdf9311763671" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:35.952156 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-wtsst" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:35.965594 4637 generic.go:334] "Generic (PLEG): container finished" podID="9c657096-ed3a-4b49-a646-4ebad0261998" containerID="ad46b8aa95ab5b0658152c889c4b65dfd93159108db3526530e4365c87f4e433" exitCode=0 Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:35.965780 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f547-account-create-brb84" event={"ID":"9c657096-ed3a-4b49-a646-4ebad0261998","Type":"ContainerDied","Data":"ad46b8aa95ab5b0658152c889c4b65dfd93159108db3526530e4365c87f4e433"} Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:35.965805 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f547-account-create-brb84" event={"ID":"9c657096-ed3a-4b49-a646-4ebad0261998","Type":"ContainerStarted","Data":"c901d9a9d077781168924748ce1088dc0aa1034d1bd81cccea77d0a65e853b31"} Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.009881 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6bfc57fb6f-w2dxg"] Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.014744 4637 generic.go:334] "Generic (PLEG): container finished" podID="2fbd0319-a7c4-4fa8-928c-50ba5dab6777" containerID="2fd5ab09855d1bf5d9a25bbbddf8692eee529e45aed52c929677508196da3b8d" exitCode=0 Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.014838 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" event={"ID":"2fbd0319-a7c4-4fa8-928c-50ba5dab6777","Type":"ContainerDied","Data":"2fd5ab09855d1bf5d9a25bbbddf8692eee529e45aed52c929677508196da3b8d"} Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.014874 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" 
event={"ID":"2fbd0319-a7c4-4fa8-928c-50ba5dab6777","Type":"ContainerStarted","Data":"17cad55f97b288f02de16bf189c665ecb42c07aeb2e5778577cdca723e150861"} Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.043209 4637 generic.go:334] "Generic (PLEG): container finished" podID="a2866484-72b3-4826-b45d-f015df568ee1" containerID="df23eeac91b96a30fe792ded19bd9c38db8f91aa6239bd353c0af0344d902ee9" exitCode=0 Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.043281 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cd96-account-create-kx994" event={"ID":"a2866484-72b3-4826-b45d-f015df568ee1","Type":"ContainerDied","Data":"df23eeac91b96a30fe792ded19bd9c38db8f91aa6239bd353c0af0344d902ee9"} Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.081279 4637 generic.go:334] "Generic (PLEG): container finished" podID="61c23c76-9613-401c-aff5-5ad572188e85" containerID="30b7bb7e9cf3a9282f5b75e2064539cc3e396330a0d917c63335e38bf91663f9" exitCode=0 Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.082147 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a37f-account-create-cjk64" event={"ID":"61c23c76-9613-401c-aff5-5ad572188e85","Type":"ContainerDied","Data":"30b7bb7e9cf3a9282f5b75e2064539cc3e396330a0d917c63335e38bf91663f9"} Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.089474 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-64664649c9-zh8fh"] Dec 01 15:03:36 crc kubenswrapper[4637]: E1201 15:03:36.089914 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6def64d4-9e4d-4e24-9815-ea2de38309ff" containerName="init" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.089942 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="6def64d4-9e4d-4e24-9815-ea2de38309ff" containerName="init" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.090127 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="6def64d4-9e4d-4e24-9815-ea2de38309ff" 
containerName="init" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.091164 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-64664649c9-zh8fh" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.118519 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-64664649c9-zh8fh"] Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.171244 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.219459 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-wtsst"] Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.231386 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0db1960f-8ebf-4f82-a6d4-f780b553061a-horizon-secret-key\") pod \"horizon-64664649c9-zh8fh\" (UID: \"0db1960f-8ebf-4f82-a6d4-f780b553061a\") " pod="openstack/horizon-64664649c9-zh8fh" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.231518 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0db1960f-8ebf-4f82-a6d4-f780b553061a-logs\") pod \"horizon-64664649c9-zh8fh\" (UID: \"0db1960f-8ebf-4f82-a6d4-f780b553061a\") " pod="openstack/horizon-64664649c9-zh8fh" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.231557 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0db1960f-8ebf-4f82-a6d4-f780b553061a-scripts\") pod \"horizon-64664649c9-zh8fh\" (UID: \"0db1960f-8ebf-4f82-a6d4-f780b553061a\") " pod="openstack/horizon-64664649c9-zh8fh" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.231605 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht89r\" (UniqueName: \"kubernetes.io/projected/0db1960f-8ebf-4f82-a6d4-f780b553061a-kube-api-access-ht89r\") pod \"horizon-64664649c9-zh8fh\" (UID: \"0db1960f-8ebf-4f82-a6d4-f780b553061a\") " pod="openstack/horizon-64664649c9-zh8fh" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.231652 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0db1960f-8ebf-4f82-a6d4-f780b553061a-config-data\") pod \"horizon-64664649c9-zh8fh\" (UID: \"0db1960f-8ebf-4f82-a6d4-f780b553061a\") " pod="openstack/horizon-64664649c9-zh8fh" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.255703 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-wtsst"] Dec 01 15:03:36 crc kubenswrapper[4637]: W1201 15:03:36.319356 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9e23a67_fbfc_4f32_ab91_404522460d90.slice/crio-39e38976e421e49ca470e85f6a51f78cb692a0aac7fba57b8e2ad45d2f44246f WatchSource:0}: Error finding container 39e38976e421e49ca470e85f6a51f78cb692a0aac7fba57b8e2ad45d2f44246f: Status 404 returned error can't find the container with id 39e38976e421e49ca470e85f6a51f78cb692a0aac7fba57b8e2ad45d2f44246f Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.335156 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0db1960f-8ebf-4f82-a6d4-f780b553061a-horizon-secret-key\") pod \"horizon-64664649c9-zh8fh\" (UID: \"0db1960f-8ebf-4f82-a6d4-f780b553061a\") " pod="openstack/horizon-64664649c9-zh8fh" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.335242 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0db1960f-8ebf-4f82-a6d4-f780b553061a-logs\") pod \"horizon-64664649c9-zh8fh\" (UID: \"0db1960f-8ebf-4f82-a6d4-f780b553061a\") " pod="openstack/horizon-64664649c9-zh8fh" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.335283 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0db1960f-8ebf-4f82-a6d4-f780b553061a-scripts\") pod \"horizon-64664649c9-zh8fh\" (UID: \"0db1960f-8ebf-4f82-a6d4-f780b553061a\") " pod="openstack/horizon-64664649c9-zh8fh" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.335311 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht89r\" (UniqueName: \"kubernetes.io/projected/0db1960f-8ebf-4f82-a6d4-f780b553061a-kube-api-access-ht89r\") pod \"horizon-64664649c9-zh8fh\" (UID: \"0db1960f-8ebf-4f82-a6d4-f780b553061a\") " pod="openstack/horizon-64664649c9-zh8fh" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.335332 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0db1960f-8ebf-4f82-a6d4-f780b553061a-config-data\") pod \"horizon-64664649c9-zh8fh\" (UID: \"0db1960f-8ebf-4f82-a6d4-f780b553061a\") " pod="openstack/horizon-64664649c9-zh8fh" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.336631 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0db1960f-8ebf-4f82-a6d4-f780b553061a-config-data\") pod \"horizon-64664649c9-zh8fh\" (UID: \"0db1960f-8ebf-4f82-a6d4-f780b553061a\") " pod="openstack/horizon-64664649c9-zh8fh" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.336899 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0db1960f-8ebf-4f82-a6d4-f780b553061a-logs\") pod \"horizon-64664649c9-zh8fh\" (UID: 
\"0db1960f-8ebf-4f82-a6d4-f780b553061a\") " pod="openstack/horizon-64664649c9-zh8fh" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.337330 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0db1960f-8ebf-4f82-a6d4-f780b553061a-scripts\") pod \"horizon-64664649c9-zh8fh\" (UID: \"0db1960f-8ebf-4f82-a6d4-f780b553061a\") " pod="openstack/horizon-64664649c9-zh8fh" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.337687 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.362368 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0db1960f-8ebf-4f82-a6d4-f780b553061a-horizon-secret-key\") pod \"horizon-64664649c9-zh8fh\" (UID: \"0db1960f-8ebf-4f82-a6d4-f780b553061a\") " pod="openstack/horizon-64664649c9-zh8fh" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.366223 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht89r\" (UniqueName: \"kubernetes.io/projected/0db1960f-8ebf-4f82-a6d4-f780b553061a-kube-api-access-ht89r\") pod \"horizon-64664649c9-zh8fh\" (UID: \"0db1960f-8ebf-4f82-a6d4-f780b553061a\") " pod="openstack/horizon-64664649c9-zh8fh" Dec 01 15:03:36 crc kubenswrapper[4637]: I1201 15:03:36.490870 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-64664649c9-zh8fh" Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.001948 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.161598 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-cnc8z" Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.236645 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-cnc8z" event={"ID":"9e3ec383-3c69-411d-a2d6-2c3ba57f5259","Type":"ContainerDied","Data":"1a6e38013136c4abbeb45c783be983df824cd7600a551af5950d46a24175fdae"} Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.236738 4637 scope.go:117] "RemoveContainer" containerID="d73a814f59a3f673228e5658d8591f9f28b25f10caeeae639bf04a861ef2091d" Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.237046 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-cnc8z" Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.256327 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c9e23a67-fbfc-4f32-ab91-404522460d90","Type":"ContainerStarted","Data":"39e38976e421e49ca470e85f6a51f78cb692a0aac7fba57b8e2ad45d2f44246f"} Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.268712 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" event={"ID":"2fbd0319-a7c4-4fa8-928c-50ba5dab6777","Type":"ContainerStarted","Data":"0f52dcffa9a8750f7ef665b85d1d446f4bf30124683f5e0129448c3f6fb720c4"} Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.269018 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.286295 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-dns-swift-storage-0\") pod \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\" (UID: \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\") " Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.286419 4637 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-ovsdbserver-sb\") pod \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\" (UID: \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\") " Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.286476 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-dns-svc\") pod \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\" (UID: \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\") " Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.286503 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-ovsdbserver-nb\") pod \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\" (UID: \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\") " Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.286662 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-config\") pod \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\" (UID: \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\") " Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.286728 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llgsw\" (UniqueName: \"kubernetes.io/projected/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-kube-api-access-llgsw\") pod \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\" (UID: \"9e3ec383-3c69-411d-a2d6-2c3ba57f5259\") " Dec 01 15:03:37 crc kubenswrapper[4637]: E1201 15:03:37.296139 4637 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30f33978_c393_4faf_99c0_b6ce509f1d3f.slice/crio-2ef67e4525c9b73e6616612d6ccc7159ce8d82799ebfa8ba7b4ec6d608d35b10\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30f33978_c393_4faf_99c0_b6ce509f1d3f.slice\": RecentStats: unable to find data in memory cache]" Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.331355 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-kube-api-access-llgsw" (OuterVolumeSpecName: "kube-api-access-llgsw") pod "9e3ec383-3c69-411d-a2d6-2c3ba57f5259" (UID: "9e3ec383-3c69-411d-a2d6-2c3ba57f5259"). InnerVolumeSpecName "kube-api-access-llgsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.365881 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9e3ec383-3c69-411d-a2d6-2c3ba57f5259" (UID: "9e3ec383-3c69-411d-a2d6-2c3ba57f5259"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.366350 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9e3ec383-3c69-411d-a2d6-2c3ba57f5259" (UID: "9e3ec383-3c69-411d-a2d6-2c3ba57f5259"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.389877 4637 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.389914 4637 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.389946 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llgsw\" (UniqueName: \"kubernetes.io/projected/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-kube-api-access-llgsw\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.393856 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9e3ec383-3c69-411d-a2d6-2c3ba57f5259" (UID: "9e3ec383-3c69-411d-a2d6-2c3ba57f5259"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.405406 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.418354 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" podStartSLOduration=4.418321633 podStartE2EDuration="4.418321633s" podCreationTimestamp="2025-12-01 15:03:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:03:37.318046645 +0000 UTC m=+1067.835755473" watchObservedRunningTime="2025-12-01 15:03:37.418321633 +0000 UTC m=+1067.936030481" Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.421366 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9e3ec383-3c69-411d-a2d6-2c3ba57f5259" (UID: "9e3ec383-3c69-411d-a2d6-2c3ba57f5259"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:03:37 crc kubenswrapper[4637]: W1201 15:03:37.465022 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0db1960f_8ebf_4f82_a6d4_f780b553061a.slice/crio-fb91dda9c10c208fb3f62ac54a716c3d09b493e6c058da315102dbc93dd16534 WatchSource:0}: Error finding container fb91dda9c10c208fb3f62ac54a716c3d09b493e6c058da315102dbc93dd16534: Status 404 returned error can't find the container with id fb91dda9c10c208fb3f62ac54a716c3d09b493e6c058da315102dbc93dd16534 Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.471105 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-config" (OuterVolumeSpecName: "config") pod "9e3ec383-3c69-411d-a2d6-2c3ba57f5259" (UID: "9e3ec383-3c69-411d-a2d6-2c3ba57f5259"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.474326 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-64664649c9-zh8fh"] Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.500219 4637 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.500253 4637 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.500268 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e3ec383-3c69-411d-a2d6-2c3ba57f5259-config\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 
15:03:37.648819 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-cnc8z"] Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.668558 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-cnc8z"] Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.783660 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6def64d4-9e4d-4e24-9815-ea2de38309ff" path="/var/lib/kubelet/pods/6def64d4-9e4d-4e24-9815-ea2de38309ff/volumes" Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.785221 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e3ec383-3c69-411d-a2d6-2c3ba57f5259" path="/var/lib/kubelet/pods/9e3ec383-3c69-411d-a2d6-2c3ba57f5259/volumes" Dec 01 15:03:37 crc kubenswrapper[4637]: I1201 15:03:37.856745 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f547-account-create-brb84" Dec 01 15:03:38 crc kubenswrapper[4637]: I1201 15:03:38.014721 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzxvv\" (UniqueName: \"kubernetes.io/projected/9c657096-ed3a-4b49-a646-4ebad0261998-kube-api-access-lzxvv\") pod \"9c657096-ed3a-4b49-a646-4ebad0261998\" (UID: \"9c657096-ed3a-4b49-a646-4ebad0261998\") " Dec 01 15:03:38 crc kubenswrapper[4637]: I1201 15:03:38.032074 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c657096-ed3a-4b49-a646-4ebad0261998-kube-api-access-lzxvv" (OuterVolumeSpecName: "kube-api-access-lzxvv") pod "9c657096-ed3a-4b49-a646-4ebad0261998" (UID: "9c657096-ed3a-4b49-a646-4ebad0261998"). InnerVolumeSpecName "kube-api-access-lzxvv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:03:38 crc kubenswrapper[4637]: I1201 15:03:38.117333 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzxvv\" (UniqueName: \"kubernetes.io/projected/9c657096-ed3a-4b49-a646-4ebad0261998-kube-api-access-lzxvv\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:38 crc kubenswrapper[4637]: I1201 15:03:38.192173 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a37f-account-create-cjk64" Dec 01 15:03:38 crc kubenswrapper[4637]: I1201 15:03:38.203603 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cd96-account-create-kx994" Dec 01 15:03:38 crc kubenswrapper[4637]: I1201 15:03:38.323262 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zcpg\" (UniqueName: \"kubernetes.io/projected/a2866484-72b3-4826-b45d-f015df568ee1-kube-api-access-9zcpg\") pod \"a2866484-72b3-4826-b45d-f015df568ee1\" (UID: \"a2866484-72b3-4826-b45d-f015df568ee1\") " Dec 01 15:03:38 crc kubenswrapper[4637]: I1201 15:03:38.323428 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn555\" (UniqueName: \"kubernetes.io/projected/61c23c76-9613-401c-aff5-5ad572188e85-kube-api-access-pn555\") pod \"61c23c76-9613-401c-aff5-5ad572188e85\" (UID: \"61c23c76-9613-401c-aff5-5ad572188e85\") " Dec 01 15:03:38 crc kubenswrapper[4637]: I1201 15:03:38.329270 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2866484-72b3-4826-b45d-f015df568ee1-kube-api-access-9zcpg" (OuterVolumeSpecName: "kube-api-access-9zcpg") pod "a2866484-72b3-4826-b45d-f015df568ee1" (UID: "a2866484-72b3-4826-b45d-f015df568ee1"). InnerVolumeSpecName "kube-api-access-9zcpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:03:38 crc kubenswrapper[4637]: I1201 15:03:38.331172 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61c23c76-9613-401c-aff5-5ad572188e85-kube-api-access-pn555" (OuterVolumeSpecName: "kube-api-access-pn555") pod "61c23c76-9613-401c-aff5-5ad572188e85" (UID: "61c23c76-9613-401c-aff5-5ad572188e85"). InnerVolumeSpecName "kube-api-access-pn555". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:03:38 crc kubenswrapper[4637]: I1201 15:03:38.337955 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a37f-account-create-cjk64" Dec 01 15:03:38 crc kubenswrapper[4637]: I1201 15:03:38.337951 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a37f-account-create-cjk64" event={"ID":"61c23c76-9613-401c-aff5-5ad572188e85","Type":"ContainerDied","Data":"0360c6916e89cff0c4b1ff2beddaca7c2158a4f732f278ba692f4bb7cf892cfa"} Dec 01 15:03:38 crc kubenswrapper[4637]: I1201 15:03:38.338139 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0360c6916e89cff0c4b1ff2beddaca7c2158a4f732f278ba692f4bb7cf892cfa" Dec 01 15:03:38 crc kubenswrapper[4637]: I1201 15:03:38.347974 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c9e23a67-fbfc-4f32-ab91-404522460d90","Type":"ContainerStarted","Data":"05baf4e48431d86d1db24e1c31feda024361d5cb7bd53c067a91d2438d5e21b9"} Dec 01 15:03:38 crc kubenswrapper[4637]: I1201 15:03:38.349969 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64664649c9-zh8fh" event={"ID":"0db1960f-8ebf-4f82-a6d4-f780b553061a","Type":"ContainerStarted","Data":"fb91dda9c10c208fb3f62ac54a716c3d09b493e6c058da315102dbc93dd16534"} Dec 01 15:03:38 crc kubenswrapper[4637]: I1201 15:03:38.355107 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"91fed8c1-7b36-4b01-8edb-35fe65411790","Type":"ContainerStarted","Data":"8f0f0c6e32ee8832e09f5bf0a18daddfc2c9cb56b78a7b9447f85edd16b51d66"} Dec 01 15:03:38 crc kubenswrapper[4637]: I1201 15:03:38.358744 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f547-account-create-brb84" Dec 01 15:03:38 crc kubenswrapper[4637]: I1201 15:03:38.358762 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f547-account-create-brb84" event={"ID":"9c657096-ed3a-4b49-a646-4ebad0261998","Type":"ContainerDied","Data":"c901d9a9d077781168924748ce1088dc0aa1034d1bd81cccea77d0a65e853b31"} Dec 01 15:03:38 crc kubenswrapper[4637]: I1201 15:03:38.358803 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c901d9a9d077781168924748ce1088dc0aa1034d1bd81cccea77d0a65e853b31" Dec 01 15:03:38 crc kubenswrapper[4637]: I1201 15:03:38.362058 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cd96-account-create-kx994" event={"ID":"a2866484-72b3-4826-b45d-f015df568ee1","Type":"ContainerDied","Data":"0dbe0a6662fba2a0b2f0f41491bacbeba46ab93e62e0e6f170aa2c629ceaf0e7"} Dec 01 15:03:38 crc kubenswrapper[4637]: I1201 15:03:38.362110 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dbe0a6662fba2a0b2f0f41491bacbeba46ab93e62e0e6f170aa2c629ceaf0e7" Dec 01 15:03:38 crc kubenswrapper[4637]: I1201 15:03:38.362072 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cd96-account-create-kx994" Dec 01 15:03:38 crc kubenswrapper[4637]: I1201 15:03:38.450008 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn555\" (UniqueName: \"kubernetes.io/projected/61c23c76-9613-401c-aff5-5ad572188e85-kube-api-access-pn555\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:38 crc kubenswrapper[4637]: I1201 15:03:38.450041 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zcpg\" (UniqueName: \"kubernetes.io/projected/a2866484-72b3-4826-b45d-f015df568ee1-kube-api-access-9zcpg\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:39 crc kubenswrapper[4637]: I1201 15:03:39.383832 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c9e23a67-fbfc-4f32-ab91-404522460d90","Type":"ContainerStarted","Data":"bb61d65a183af09f46193ff5def82d41335b0928adf9ce99fc5ec78541be8c3e"} Dec 01 15:03:39 crc kubenswrapper[4637]: I1201 15:03:39.384373 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c9e23a67-fbfc-4f32-ab91-404522460d90" containerName="glance-log" containerID="cri-o://05baf4e48431d86d1db24e1c31feda024361d5cb7bd53c067a91d2438d5e21b9" gracePeriod=30 Dec 01 15:03:39 crc kubenswrapper[4637]: I1201 15:03:39.385023 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c9e23a67-fbfc-4f32-ab91-404522460d90" containerName="glance-httpd" containerID="cri-o://bb61d65a183af09f46193ff5def82d41335b0928adf9ce99fc5ec78541be8c3e" gracePeriod=30 Dec 01 15:03:39 crc kubenswrapper[4637]: I1201 15:03:39.392341 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"91fed8c1-7b36-4b01-8edb-35fe65411790","Type":"ContainerStarted","Data":"e2bc4a7abef980d55bdda9c4c0b11f1fe7fa07d6a79069008bc2f0891fbcc922"} Dec 01 
15:03:39 crc kubenswrapper[4637]: I1201 15:03:39.427554 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.42753256 podStartE2EDuration="6.42753256s" podCreationTimestamp="2025-12-01 15:03:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:03:39.417835307 +0000 UTC m=+1069.935544135" watchObservedRunningTime="2025-12-01 15:03:39.42753256 +0000 UTC m=+1069.945241388" Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.427860 4637 generic.go:334] "Generic (PLEG): container finished" podID="c9e23a67-fbfc-4f32-ab91-404522460d90" containerID="bb61d65a183af09f46193ff5def82d41335b0928adf9ce99fc5ec78541be8c3e" exitCode=143 Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.427894 4637 generic.go:334] "Generic (PLEG): container finished" podID="c9e23a67-fbfc-4f32-ab91-404522460d90" containerID="05baf4e48431d86d1db24e1c31feda024361d5cb7bd53c067a91d2438d5e21b9" exitCode=143 Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.427919 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c9e23a67-fbfc-4f32-ab91-404522460d90","Type":"ContainerDied","Data":"bb61d65a183af09f46193ff5def82d41335b0928adf9ce99fc5ec78541be8c3e"} Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.427992 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c9e23a67-fbfc-4f32-ab91-404522460d90","Type":"ContainerDied","Data":"05baf4e48431d86d1db24e1c31feda024361d5cb7bd53c067a91d2438d5e21b9"} Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.428005 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"c9e23a67-fbfc-4f32-ab91-404522460d90","Type":"ContainerDied","Data":"39e38976e421e49ca470e85f6a51f78cb692a0aac7fba57b8e2ad45d2f44246f"} Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.428016 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39e38976e421e49ca470e85f6a51f78cb692a0aac7fba57b8e2ad45d2f44246f" Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.485906 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.627305 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9e23a67-fbfc-4f32-ab91-404522460d90-combined-ca-bundle\") pod \"c9e23a67-fbfc-4f32-ab91-404522460d90\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") " Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.627362 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9e23a67-fbfc-4f32-ab91-404522460d90-config-data\") pod \"c9e23a67-fbfc-4f32-ab91-404522460d90\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") " Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.627469 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9e23a67-fbfc-4f32-ab91-404522460d90-logs\") pod \"c9e23a67-fbfc-4f32-ab91-404522460d90\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") " Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.627538 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c9e23a67-fbfc-4f32-ab91-404522460d90-httpd-run\") pod \"c9e23a67-fbfc-4f32-ab91-404522460d90\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") " Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 
15:03:40.627626 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk28c\" (UniqueName: \"kubernetes.io/projected/c9e23a67-fbfc-4f32-ab91-404522460d90-kube-api-access-dk28c\") pod \"c9e23a67-fbfc-4f32-ab91-404522460d90\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") " Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.627648 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9e23a67-fbfc-4f32-ab91-404522460d90-scripts\") pod \"c9e23a67-fbfc-4f32-ab91-404522460d90\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") " Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.627691 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"c9e23a67-fbfc-4f32-ab91-404522460d90\" (UID: \"c9e23a67-fbfc-4f32-ab91-404522460d90\") " Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.628910 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9e23a67-fbfc-4f32-ab91-404522460d90-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c9e23a67-fbfc-4f32-ab91-404522460d90" (UID: "c9e23a67-fbfc-4f32-ab91-404522460d90"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.629242 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9e23a67-fbfc-4f32-ab91-404522460d90-logs" (OuterVolumeSpecName: "logs") pod "c9e23a67-fbfc-4f32-ab91-404522460d90" (UID: "c9e23a67-fbfc-4f32-ab91-404522460d90"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.646640 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "c9e23a67-fbfc-4f32-ab91-404522460d90" (UID: "c9e23a67-fbfc-4f32-ab91-404522460d90"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.651023 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9e23a67-fbfc-4f32-ab91-404522460d90-kube-api-access-dk28c" (OuterVolumeSpecName: "kube-api-access-dk28c") pod "c9e23a67-fbfc-4f32-ab91-404522460d90" (UID: "c9e23a67-fbfc-4f32-ab91-404522460d90"). InnerVolumeSpecName "kube-api-access-dk28c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.659822 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9e23a67-fbfc-4f32-ab91-404522460d90-scripts" (OuterVolumeSpecName: "scripts") pod "c9e23a67-fbfc-4f32-ab91-404522460d90" (UID: "c9e23a67-fbfc-4f32-ab91-404522460d90"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.677612 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9e23a67-fbfc-4f32-ab91-404522460d90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9e23a67-fbfc-4f32-ab91-404522460d90" (UID: "c9e23a67-fbfc-4f32-ab91-404522460d90"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.685330 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9e23a67-fbfc-4f32-ab91-404522460d90-config-data" (OuterVolumeSpecName: "config-data") pod "c9e23a67-fbfc-4f32-ab91-404522460d90" (UID: "c9e23a67-fbfc-4f32-ab91-404522460d90"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.729973 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk28c\" (UniqueName: \"kubernetes.io/projected/c9e23a67-fbfc-4f32-ab91-404522460d90-kube-api-access-dk28c\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.730000 4637 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9e23a67-fbfc-4f32-ab91-404522460d90-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.730042 4637 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.730052 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9e23a67-fbfc-4f32-ab91-404522460d90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.730061 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9e23a67-fbfc-4f32-ab91-404522460d90-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.730071 4637 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c9e23a67-fbfc-4f32-ab91-404522460d90-logs\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.730079 4637 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c9e23a67-fbfc-4f32-ab91-404522460d90-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.751983 4637 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Dec 01 15:03:40 crc kubenswrapper[4637]: I1201 15:03:40.844337 4637 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.448398 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.448405 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"91fed8c1-7b36-4b01-8edb-35fe65411790","Type":"ContainerStarted","Data":"800c2215b0bc96fbb47389b9bc46649e3a0d6182fcfeccfd34b65e67eec1a2f3"} Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.448756 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="91fed8c1-7b36-4b01-8edb-35fe65411790" containerName="glance-log" containerID="cri-o://e2bc4a7abef980d55bdda9c4c0b11f1fe7fa07d6a79069008bc2f0891fbcc922" gracePeriod=30 Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.448970 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="91fed8c1-7b36-4b01-8edb-35fe65411790" containerName="glance-httpd" 
containerID="cri-o://800c2215b0bc96fbb47389b9bc46649e3a0d6182fcfeccfd34b65e67eec1a2f3" gracePeriod=30 Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.488142 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.48812419 podStartE2EDuration="8.48812419s" podCreationTimestamp="2025-12-01 15:03:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:03:41.485285533 +0000 UTC m=+1072.002994361" watchObservedRunningTime="2025-12-01 15:03:41.48812419 +0000 UTC m=+1072.005833018" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.519004 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.531488 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.565496 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 15:03:41 crc kubenswrapper[4637]: E1201 15:03:41.565946 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2866484-72b3-4826-b45d-f015df568ee1" containerName="mariadb-account-create" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.565960 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2866484-72b3-4826-b45d-f015df568ee1" containerName="mariadb-account-create" Dec 01 15:03:41 crc kubenswrapper[4637]: E1201 15:03:41.565974 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c657096-ed3a-4b49-a646-4ebad0261998" containerName="mariadb-account-create" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.565980 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c657096-ed3a-4b49-a646-4ebad0261998" containerName="mariadb-account-create" Dec 01 15:03:41 
crc kubenswrapper[4637]: E1201 15:03:41.566007 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61c23c76-9613-401c-aff5-5ad572188e85" containerName="mariadb-account-create" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.566015 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="61c23c76-9613-401c-aff5-5ad572188e85" containerName="mariadb-account-create" Dec 01 15:03:41 crc kubenswrapper[4637]: E1201 15:03:41.566026 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3ec383-3c69-411d-a2d6-2c3ba57f5259" containerName="init" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.566032 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e3ec383-3c69-411d-a2d6-2c3ba57f5259" containerName="init" Dec 01 15:03:41 crc kubenswrapper[4637]: E1201 15:03:41.566056 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9e23a67-fbfc-4f32-ab91-404522460d90" containerName="glance-httpd" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.566061 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e23a67-fbfc-4f32-ab91-404522460d90" containerName="glance-httpd" Dec 01 15:03:41 crc kubenswrapper[4637]: E1201 15:03:41.566071 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9e23a67-fbfc-4f32-ab91-404522460d90" containerName="glance-log" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.566077 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e23a67-fbfc-4f32-ab91-404522460d90" containerName="glance-log" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.566266 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="61c23c76-9613-401c-aff5-5ad572188e85" containerName="mariadb-account-create" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.566287 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9e23a67-fbfc-4f32-ab91-404522460d90" containerName="glance-httpd" Dec 01 15:03:41 crc kubenswrapper[4637]: 
I1201 15:03:41.566296 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e3ec383-3c69-411d-a2d6-2c3ba57f5259" containerName="init" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.566308 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9e23a67-fbfc-4f32-ab91-404522460d90" containerName="glance-log" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.566321 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c657096-ed3a-4b49-a646-4ebad0261998" containerName="mariadb-account-create" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.566328 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2866484-72b3-4826-b45d-f015df568ee1" containerName="mariadb-account-create" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.567307 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.575577 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.576737 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.665493 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e298d96-fc57-44a4-9085-f1c89c1be872-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2e298d96-fc57-44a4-9085-f1c89c1be872\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.665537 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e298d96-fc57-44a4-9085-f1c89c1be872-logs\") pod \"glance-default-external-api-0\" (UID: 
\"2e298d96-fc57-44a4-9085-f1c89c1be872\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.665612 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbsxf\" (UniqueName: \"kubernetes.io/projected/2e298d96-fc57-44a4-9085-f1c89c1be872-kube-api-access-jbsxf\") pod \"glance-default-external-api-0\" (UID: \"2e298d96-fc57-44a4-9085-f1c89c1be872\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.665647 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e298d96-fc57-44a4-9085-f1c89c1be872-config-data\") pod \"glance-default-external-api-0\" (UID: \"2e298d96-fc57-44a4-9085-f1c89c1be872\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.665684 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e298d96-fc57-44a4-9085-f1c89c1be872-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2e298d96-fc57-44a4-9085-f1c89c1be872\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.665718 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e298d96-fc57-44a4-9085-f1c89c1be872-scripts\") pod \"glance-default-external-api-0\" (UID: \"2e298d96-fc57-44a4-9085-f1c89c1be872\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.665754 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod 
\"glance-default-external-api-0\" (UID: \"2e298d96-fc57-44a4-9085-f1c89c1be872\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.767753 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e298d96-fc57-44a4-9085-f1c89c1be872-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2e298d96-fc57-44a4-9085-f1c89c1be872\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.767807 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e298d96-fc57-44a4-9085-f1c89c1be872-logs\") pod \"glance-default-external-api-0\" (UID: \"2e298d96-fc57-44a4-9085-f1c89c1be872\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.767876 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbsxf\" (UniqueName: \"kubernetes.io/projected/2e298d96-fc57-44a4-9085-f1c89c1be872-kube-api-access-jbsxf\") pod \"glance-default-external-api-0\" (UID: \"2e298d96-fc57-44a4-9085-f1c89c1be872\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.767909 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e298d96-fc57-44a4-9085-f1c89c1be872-config-data\") pod \"glance-default-external-api-0\" (UID: \"2e298d96-fc57-44a4-9085-f1c89c1be872\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.767952 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e298d96-fc57-44a4-9085-f1c89c1be872-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"2e298d96-fc57-44a4-9085-f1c89c1be872\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.767978 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e298d96-fc57-44a4-9085-f1c89c1be872-scripts\") pod \"glance-default-external-api-0\" (UID: \"2e298d96-fc57-44a4-9085-f1c89c1be872\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.768008 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"2e298d96-fc57-44a4-9085-f1c89c1be872\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.768374 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e298d96-fc57-44a4-9085-f1c89c1be872-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2e298d96-fc57-44a4-9085-f1c89c1be872\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.768496 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e298d96-fc57-44a4-9085-f1c89c1be872-logs\") pod \"glance-default-external-api-0\" (UID: \"2e298d96-fc57-44a4-9085-f1c89c1be872\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.771857 4637 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"2e298d96-fc57-44a4-9085-f1c89c1be872\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Dec 01 15:03:41 crc 
kubenswrapper[4637]: I1201 15:03:41.782887 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e298d96-fc57-44a4-9085-f1c89c1be872-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2e298d96-fc57-44a4-9085-f1c89c1be872\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.788268 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e298d96-fc57-44a4-9085-f1c89c1be872-scripts\") pod \"glance-default-external-api-0\" (UID: \"2e298d96-fc57-44a4-9085-f1c89c1be872\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.791815 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbsxf\" (UniqueName: \"kubernetes.io/projected/2e298d96-fc57-44a4-9085-f1c89c1be872-kube-api-access-jbsxf\") pod \"glance-default-external-api-0\" (UID: \"2e298d96-fc57-44a4-9085-f1c89c1be872\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.813025 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e298d96-fc57-44a4-9085-f1c89c1be872-config-data\") pod \"glance-default-external-api-0\" (UID: \"2e298d96-fc57-44a4-9085-f1c89c1be872\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.822711 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"2e298d96-fc57-44a4-9085-f1c89c1be872\") " pod="openstack/glance-default-external-api-0" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.825565 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c9e23a67-fbfc-4f32-ab91-404522460d90" path="/var/lib/kubelet/pods/c9e23a67-fbfc-4f32-ab91-404522460d90/volumes" Dec 01 15:03:41 crc kubenswrapper[4637]: I1201 15:03:41.888212 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 15:03:42 crc kubenswrapper[4637]: I1201 15:03:42.464606 4637 generic.go:334] "Generic (PLEG): container finished" podID="91fed8c1-7b36-4b01-8edb-35fe65411790" containerID="800c2215b0bc96fbb47389b9bc46649e3a0d6182fcfeccfd34b65e67eec1a2f3" exitCode=0 Dec 01 15:03:42 crc kubenswrapper[4637]: I1201 15:03:42.464639 4637 generic.go:334] "Generic (PLEG): container finished" podID="91fed8c1-7b36-4b01-8edb-35fe65411790" containerID="e2bc4a7abef980d55bdda9c4c0b11f1fe7fa07d6a79069008bc2f0891fbcc922" exitCode=143 Dec 01 15:03:42 crc kubenswrapper[4637]: I1201 15:03:42.464680 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"91fed8c1-7b36-4b01-8edb-35fe65411790","Type":"ContainerDied","Data":"800c2215b0bc96fbb47389b9bc46649e3a0d6182fcfeccfd34b65e67eec1a2f3"} Dec 01 15:03:42 crc kubenswrapper[4637]: I1201 15:03:42.464708 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"91fed8c1-7b36-4b01-8edb-35fe65411790","Type":"ContainerDied","Data":"e2bc4a7abef980d55bdda9c4c0b11f1fe7fa07d6a79069008bc2f0891fbcc922"} Dec 01 15:03:42 crc kubenswrapper[4637]: I1201 15:03:42.467716 4637 generic.go:334] "Generic (PLEG): container finished" podID="a9d588ec-741e-4df6-94e1-a824c312d598" containerID="15dc05e4253bfaee5d3e148c008f60d4dfa934c274ce587be883c00844f3a7d8" exitCode=0 Dec 01 15:03:42 crc kubenswrapper[4637]: I1201 15:03:42.467744 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j85h5" 
event={"ID":"a9d588ec-741e-4df6-94e1-a824c312d598","Type":"ContainerDied","Data":"15dc05e4253bfaee5d3e148c008f60d4dfa934c274ce587be883c00844f3a7d8"} Dec 01 15:03:42 crc kubenswrapper[4637]: I1201 15:03:42.976074 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-vlwgj"] Dec 01 15:03:42 crc kubenswrapper[4637]: I1201 15:03:42.977568 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vlwgj" Dec 01 15:03:42 crc kubenswrapper[4637]: I1201 15:03:42.980015 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-m5gn9" Dec 01 15:03:42 crc kubenswrapper[4637]: I1201 15:03:42.980022 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 01 15:03:42 crc kubenswrapper[4637]: I1201 15:03:42.989478 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 01 15:03:42 crc kubenswrapper[4637]: I1201 15:03:42.997761 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-vlwgj"] Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.099045 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-db-sync-config-data\") pod \"cinder-db-sync-vlwgj\" (UID: \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\") " pod="openstack/cinder-db-sync-vlwgj" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.099214 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-scripts\") pod \"cinder-db-sync-vlwgj\" (UID: \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\") " pod="openstack/cinder-db-sync-vlwgj" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.099429 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-etc-machine-id\") pod \"cinder-db-sync-vlwgj\" (UID: \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\") " pod="openstack/cinder-db-sync-vlwgj" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.099547 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-config-data\") pod \"cinder-db-sync-vlwgj\" (UID: \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\") " pod="openstack/cinder-db-sync-vlwgj" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.099618 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-combined-ca-bundle\") pod \"cinder-db-sync-vlwgj\" (UID: \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\") " pod="openstack/cinder-db-sync-vlwgj" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.099696 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc85c\" (UniqueName: \"kubernetes.io/projected/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-kube-api-access-tc85c\") pod \"cinder-db-sync-vlwgj\" (UID: \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\") " pod="openstack/cinder-db-sync-vlwgj" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.204306 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-db-sync-config-data\") pod \"cinder-db-sync-vlwgj\" (UID: \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\") " pod="openstack/cinder-db-sync-vlwgj" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.204398 4637 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-scripts\") pod \"cinder-db-sync-vlwgj\" (UID: \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\") " pod="openstack/cinder-db-sync-vlwgj" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.204471 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-etc-machine-id\") pod \"cinder-db-sync-vlwgj\" (UID: \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\") " pod="openstack/cinder-db-sync-vlwgj" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.204512 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-config-data\") pod \"cinder-db-sync-vlwgj\" (UID: \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\") " pod="openstack/cinder-db-sync-vlwgj" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.204545 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-combined-ca-bundle\") pod \"cinder-db-sync-vlwgj\" (UID: \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\") " pod="openstack/cinder-db-sync-vlwgj" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.204601 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc85c\" (UniqueName: \"kubernetes.io/projected/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-kube-api-access-tc85c\") pod \"cinder-db-sync-vlwgj\" (UID: \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\") " pod="openstack/cinder-db-sync-vlwgj" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.205546 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-etc-machine-id\") pod 
\"cinder-db-sync-vlwgj\" (UID: \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\") " pod="openstack/cinder-db-sync-vlwgj" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.210112 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-db-sync-config-data\") pod \"cinder-db-sync-vlwgj\" (UID: \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\") " pod="openstack/cinder-db-sync-vlwgj" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.213869 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-scripts\") pod \"cinder-db-sync-vlwgj\" (UID: \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\") " pod="openstack/cinder-db-sync-vlwgj" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.227069 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-combined-ca-bundle\") pod \"cinder-db-sync-vlwgj\" (UID: \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\") " pod="openstack/cinder-db-sync-vlwgj" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.238480 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc85c\" (UniqueName: \"kubernetes.io/projected/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-kube-api-access-tc85c\") pod \"cinder-db-sync-vlwgj\" (UID: \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\") " pod="openstack/cinder-db-sync-vlwgj" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.239163 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-config-data\") pod \"cinder-db-sync-vlwgj\" (UID: \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\") " pod="openstack/cinder-db-sync-vlwgj" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.274963 
4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-zc7pv"] Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.277474 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zc7pv" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.291857 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.292103 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-7q9nd" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.354891 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vlwgj" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.358619 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-zc7pv"] Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.394916 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-x4gwb"] Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.398494 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-x4gwb" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.403727 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.403878 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rptj8" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.404183 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.417184 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7137da0-10ce-4ac0-8e2e-658247d8c0b7-db-sync-config-data\") pod \"barbican-db-sync-zc7pv\" (UID: \"a7137da0-10ce-4ac0-8e2e-658247d8c0b7\") " pod="openstack/barbican-db-sync-zc7pv" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.419213 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7137da0-10ce-4ac0-8e2e-658247d8c0b7-combined-ca-bundle\") pod \"barbican-db-sync-zc7pv\" (UID: \"a7137da0-10ce-4ac0-8e2e-658247d8c0b7\") " pod="openstack/barbican-db-sync-zc7pv" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.419270 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcbkm\" (UniqueName: \"kubernetes.io/projected/a7137da0-10ce-4ac0-8e2e-658247d8c0b7-kube-api-access-rcbkm\") pod \"barbican-db-sync-zc7pv\" (UID: \"a7137da0-10ce-4ac0-8e2e-658247d8c0b7\") " pod="openstack/barbican-db-sync-zc7pv" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.434127 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-x4gwb"] Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.521097 
4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31a1344d-109c-400f-ac50-60be5fed1255-combined-ca-bundle\") pod \"neutron-db-sync-x4gwb\" (UID: \"31a1344d-109c-400f-ac50-60be5fed1255\") " pod="openstack/neutron-db-sync-x4gwb" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.521208 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7137da0-10ce-4ac0-8e2e-658247d8c0b7-combined-ca-bundle\") pod \"barbican-db-sync-zc7pv\" (UID: \"a7137da0-10ce-4ac0-8e2e-658247d8c0b7\") " pod="openstack/barbican-db-sync-zc7pv" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.521237 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcbkm\" (UniqueName: \"kubernetes.io/projected/a7137da0-10ce-4ac0-8e2e-658247d8c0b7-kube-api-access-rcbkm\") pod \"barbican-db-sync-zc7pv\" (UID: \"a7137da0-10ce-4ac0-8e2e-658247d8c0b7\") " pod="openstack/barbican-db-sync-zc7pv" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.521285 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/31a1344d-109c-400f-ac50-60be5fed1255-config\") pod \"neutron-db-sync-x4gwb\" (UID: \"31a1344d-109c-400f-ac50-60be5fed1255\") " pod="openstack/neutron-db-sync-x4gwb" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.521310 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7137da0-10ce-4ac0-8e2e-658247d8c0b7-db-sync-config-data\") pod \"barbican-db-sync-zc7pv\" (UID: \"a7137da0-10ce-4ac0-8e2e-658247d8c0b7\") " pod="openstack/barbican-db-sync-zc7pv" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.521347 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfw9q\" (UniqueName: \"kubernetes.io/projected/31a1344d-109c-400f-ac50-60be5fed1255-kube-api-access-xfw9q\") pod \"neutron-db-sync-x4gwb\" (UID: \"31a1344d-109c-400f-ac50-60be5fed1255\") " pod="openstack/neutron-db-sync-x4gwb" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.531653 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7137da0-10ce-4ac0-8e2e-658247d8c0b7-combined-ca-bundle\") pod \"barbican-db-sync-zc7pv\" (UID: \"a7137da0-10ce-4ac0-8e2e-658247d8c0b7\") " pod="openstack/barbican-db-sync-zc7pv" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.533432 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7137da0-10ce-4ac0-8e2e-658247d8c0b7-db-sync-config-data\") pod \"barbican-db-sync-zc7pv\" (UID: \"a7137da0-10ce-4ac0-8e2e-658247d8c0b7\") " pod="openstack/barbican-db-sync-zc7pv" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.553824 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcbkm\" (UniqueName: \"kubernetes.io/projected/a7137da0-10ce-4ac0-8e2e-658247d8c0b7-kube-api-access-rcbkm\") pod \"barbican-db-sync-zc7pv\" (UID: \"a7137da0-10ce-4ac0-8e2e-658247d8c0b7\") " pod="openstack/barbican-db-sync-zc7pv" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.626038 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31a1344d-109c-400f-ac50-60be5fed1255-combined-ca-bundle\") pod \"neutron-db-sync-x4gwb\" (UID: \"31a1344d-109c-400f-ac50-60be5fed1255\") " pod="openstack/neutron-db-sync-x4gwb" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.626125 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/31a1344d-109c-400f-ac50-60be5fed1255-config\") pod \"neutron-db-sync-x4gwb\" (UID: \"31a1344d-109c-400f-ac50-60be5fed1255\") " pod="openstack/neutron-db-sync-x4gwb" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.626157 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfw9q\" (UniqueName: \"kubernetes.io/projected/31a1344d-109c-400f-ac50-60be5fed1255-kube-api-access-xfw9q\") pod \"neutron-db-sync-x4gwb\" (UID: \"31a1344d-109c-400f-ac50-60be5fed1255\") " pod="openstack/neutron-db-sync-x4gwb" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.635131 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/31a1344d-109c-400f-ac50-60be5fed1255-config\") pod \"neutron-db-sync-x4gwb\" (UID: \"31a1344d-109c-400f-ac50-60be5fed1255\") " pod="openstack/neutron-db-sync-x4gwb" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.637029 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31a1344d-109c-400f-ac50-60be5fed1255-combined-ca-bundle\") pod \"neutron-db-sync-x4gwb\" (UID: \"31a1344d-109c-400f-ac50-60be5fed1255\") " pod="openstack/neutron-db-sync-x4gwb" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.661526 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfw9q\" (UniqueName: \"kubernetes.io/projected/31a1344d-109c-400f-ac50-60be5fed1255-kube-api-access-xfw9q\") pod \"neutron-db-sync-x4gwb\" (UID: \"31a1344d-109c-400f-ac50-60be5fed1255\") " pod="openstack/neutron-db-sync-x4gwb" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.666970 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zc7pv" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.730574 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-x4gwb" Dec 01 15:03:43 crc kubenswrapper[4637]: I1201 15:03:43.927955 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" Dec 01 15:03:44 crc kubenswrapper[4637]: I1201 15:03:43.999997 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-nvk98"] Dec 01 15:03:44 crc kubenswrapper[4637]: I1201 15:03:44.000893 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" podUID="912d084a-f1c0-4389-a7ca-59acd0238493" containerName="dnsmasq-dns" containerID="cri-o://99f908c1d1d89787eee196da3bdfef4cfa942a87498a5ebdeb2f7e271942e027" gracePeriod=10 Dec 01 15:03:44 crc kubenswrapper[4637]: I1201 15:03:44.488228 4637 generic.go:334] "Generic (PLEG): container finished" podID="912d084a-f1c0-4389-a7ca-59acd0238493" containerID="99f908c1d1d89787eee196da3bdfef4cfa942a87498a5ebdeb2f7e271942e027" exitCode=0 Dec 01 15:03:44 crc kubenswrapper[4637]: I1201 15:03:44.488284 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" event={"ID":"912d084a-f1c0-4389-a7ca-59acd0238493","Type":"ContainerDied","Data":"99f908c1d1d89787eee196da3bdfef4cfa942a87498a5ebdeb2f7e271942e027"} Dec 01 15:03:45 crc kubenswrapper[4637]: I1201 15:03:45.681549 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.122472 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f65455bb9-dgzkv"] Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.162171 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-64585bdddb-h9hvw"] Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.163985 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.166787 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.186410 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-64585bdddb-h9hvw"] Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.190274 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/29e960b7-8574-4c38-bb22-67f5a77aaca6-horizon-tls-certs\") pod \"horizon-64585bdddb-h9hvw\" (UID: \"29e960b7-8574-4c38-bb22-67f5a77aaca6\") " pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.190437 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29e960b7-8574-4c38-bb22-67f5a77aaca6-scripts\") pod \"horizon-64585bdddb-h9hvw\" (UID: \"29e960b7-8574-4c38-bb22-67f5a77aaca6\") " pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.190510 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29e960b7-8574-4c38-bb22-67f5a77aaca6-logs\") pod \"horizon-64585bdddb-h9hvw\" (UID: \"29e960b7-8574-4c38-bb22-67f5a77aaca6\") " pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.192534 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e960b7-8574-4c38-bb22-67f5a77aaca6-combined-ca-bundle\") pod \"horizon-64585bdddb-h9hvw\" (UID: \"29e960b7-8574-4c38-bb22-67f5a77aaca6\") " pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:03:46 crc 
kubenswrapper[4637]: I1201 15:03:46.192713 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29e960b7-8574-4c38-bb22-67f5a77aaca6-config-data\") pod \"horizon-64585bdddb-h9hvw\" (UID: \"29e960b7-8574-4c38-bb22-67f5a77aaca6\") " pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.192872 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29e960b7-8574-4c38-bb22-67f5a77aaca6-horizon-secret-key\") pod \"horizon-64585bdddb-h9hvw\" (UID: \"29e960b7-8574-4c38-bb22-67f5a77aaca6\") " pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.193020 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkkpl\" (UniqueName: \"kubernetes.io/projected/29e960b7-8574-4c38-bb22-67f5a77aaca6-kube-api-access-qkkpl\") pod \"horizon-64585bdddb-h9hvw\" (UID: \"29e960b7-8574-4c38-bb22-67f5a77aaca6\") " pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.293486 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-64664649c9-zh8fh"] Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.294417 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e960b7-8574-4c38-bb22-67f5a77aaca6-combined-ca-bundle\") pod \"horizon-64585bdddb-h9hvw\" (UID: \"29e960b7-8574-4c38-bb22-67f5a77aaca6\") " pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.294462 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29e960b7-8574-4c38-bb22-67f5a77aaca6-config-data\") pod 
\"horizon-64585bdddb-h9hvw\" (UID: \"29e960b7-8574-4c38-bb22-67f5a77aaca6\") " pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.294510 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29e960b7-8574-4c38-bb22-67f5a77aaca6-horizon-secret-key\") pod \"horizon-64585bdddb-h9hvw\" (UID: \"29e960b7-8574-4c38-bb22-67f5a77aaca6\") " pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.294530 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkkpl\" (UniqueName: \"kubernetes.io/projected/29e960b7-8574-4c38-bb22-67f5a77aaca6-kube-api-access-qkkpl\") pod \"horizon-64585bdddb-h9hvw\" (UID: \"29e960b7-8574-4c38-bb22-67f5a77aaca6\") " pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.294553 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/29e960b7-8574-4c38-bb22-67f5a77aaca6-horizon-tls-certs\") pod \"horizon-64585bdddb-h9hvw\" (UID: \"29e960b7-8574-4c38-bb22-67f5a77aaca6\") " pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.294580 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29e960b7-8574-4c38-bb22-67f5a77aaca6-scripts\") pod \"horizon-64585bdddb-h9hvw\" (UID: \"29e960b7-8574-4c38-bb22-67f5a77aaca6\") " pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.294597 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29e960b7-8574-4c38-bb22-67f5a77aaca6-logs\") pod \"horizon-64585bdddb-h9hvw\" (UID: \"29e960b7-8574-4c38-bb22-67f5a77aaca6\") " 
pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.295741 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29e960b7-8574-4c38-bb22-67f5a77aaca6-logs\") pod \"horizon-64585bdddb-h9hvw\" (UID: \"29e960b7-8574-4c38-bb22-67f5a77aaca6\") " pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.296098 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29e960b7-8574-4c38-bb22-67f5a77aaca6-scripts\") pod \"horizon-64585bdddb-h9hvw\" (UID: \"29e960b7-8574-4c38-bb22-67f5a77aaca6\") " pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.297142 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29e960b7-8574-4c38-bb22-67f5a77aaca6-config-data\") pod \"horizon-64585bdddb-h9hvw\" (UID: \"29e960b7-8574-4c38-bb22-67f5a77aaca6\") " pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.306782 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/29e960b7-8574-4c38-bb22-67f5a77aaca6-horizon-tls-certs\") pod \"horizon-64585bdddb-h9hvw\" (UID: \"29e960b7-8574-4c38-bb22-67f5a77aaca6\") " pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.320948 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e960b7-8574-4c38-bb22-67f5a77aaca6-combined-ca-bundle\") pod \"horizon-64585bdddb-h9hvw\" (UID: \"29e960b7-8574-4c38-bb22-67f5a77aaca6\") " pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.321322 4637 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29e960b7-8574-4c38-bb22-67f5a77aaca6-horizon-secret-key\") pod \"horizon-64585bdddb-h9hvw\" (UID: \"29e960b7-8574-4c38-bb22-67f5a77aaca6\") " pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.328920 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkkpl\" (UniqueName: \"kubernetes.io/projected/29e960b7-8574-4c38-bb22-67f5a77aaca6-kube-api-access-qkkpl\") pod \"horizon-64585bdddb-h9hvw\" (UID: \"29e960b7-8574-4c38-bb22-67f5a77aaca6\") " pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.339895 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-fcb665488-kvv69"] Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.342525 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-fcb665488-kvv69" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.372151 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-fcb665488-kvv69"] Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.493704 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.498312 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/269bc165-8fbc-4c63-84ef-96b74d44fc16-logs\") pod \"horizon-fcb665488-kvv69\" (UID: \"269bc165-8fbc-4c63-84ef-96b74d44fc16\") " pod="openstack/horizon-fcb665488-kvv69" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.498353 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/269bc165-8fbc-4c63-84ef-96b74d44fc16-config-data\") pod \"horizon-fcb665488-kvv69\" (UID: \"269bc165-8fbc-4c63-84ef-96b74d44fc16\") " pod="openstack/horizon-fcb665488-kvv69" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.498417 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/269bc165-8fbc-4c63-84ef-96b74d44fc16-scripts\") pod \"horizon-fcb665488-kvv69\" (UID: \"269bc165-8fbc-4c63-84ef-96b74d44fc16\") " pod="openstack/horizon-fcb665488-kvv69" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.498460 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/269bc165-8fbc-4c63-84ef-96b74d44fc16-combined-ca-bundle\") pod \"horizon-fcb665488-kvv69\" (UID: \"269bc165-8fbc-4c63-84ef-96b74d44fc16\") " pod="openstack/horizon-fcb665488-kvv69" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.498564 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/269bc165-8fbc-4c63-84ef-96b74d44fc16-horizon-secret-key\") pod \"horizon-fcb665488-kvv69\" (UID: \"269bc165-8fbc-4c63-84ef-96b74d44fc16\") " 
pod="openstack/horizon-fcb665488-kvv69" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.498687 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmlm6\" (UniqueName: \"kubernetes.io/projected/269bc165-8fbc-4c63-84ef-96b74d44fc16-kube-api-access-xmlm6\") pod \"horizon-fcb665488-kvv69\" (UID: \"269bc165-8fbc-4c63-84ef-96b74d44fc16\") " pod="openstack/horizon-fcb665488-kvv69" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.499010 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/269bc165-8fbc-4c63-84ef-96b74d44fc16-horizon-tls-certs\") pod \"horizon-fcb665488-kvv69\" (UID: \"269bc165-8fbc-4c63-84ef-96b74d44fc16\") " pod="openstack/horizon-fcb665488-kvv69" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.602884 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/269bc165-8fbc-4c63-84ef-96b74d44fc16-config-data\") pod \"horizon-fcb665488-kvv69\" (UID: \"269bc165-8fbc-4c63-84ef-96b74d44fc16\") " pod="openstack/horizon-fcb665488-kvv69" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.603010 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/269bc165-8fbc-4c63-84ef-96b74d44fc16-scripts\") pod \"horizon-fcb665488-kvv69\" (UID: \"269bc165-8fbc-4c63-84ef-96b74d44fc16\") " pod="openstack/horizon-fcb665488-kvv69" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.603069 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/269bc165-8fbc-4c63-84ef-96b74d44fc16-combined-ca-bundle\") pod \"horizon-fcb665488-kvv69\" (UID: \"269bc165-8fbc-4c63-84ef-96b74d44fc16\") " pod="openstack/horizon-fcb665488-kvv69" Dec 01 15:03:46 crc 
kubenswrapper[4637]: I1201 15:03:46.603111 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/269bc165-8fbc-4c63-84ef-96b74d44fc16-horizon-secret-key\") pod \"horizon-fcb665488-kvv69\" (UID: \"269bc165-8fbc-4c63-84ef-96b74d44fc16\") " pod="openstack/horizon-fcb665488-kvv69" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.603144 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmlm6\" (UniqueName: \"kubernetes.io/projected/269bc165-8fbc-4c63-84ef-96b74d44fc16-kube-api-access-xmlm6\") pod \"horizon-fcb665488-kvv69\" (UID: \"269bc165-8fbc-4c63-84ef-96b74d44fc16\") " pod="openstack/horizon-fcb665488-kvv69" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.603954 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/269bc165-8fbc-4c63-84ef-96b74d44fc16-scripts\") pod \"horizon-fcb665488-kvv69\" (UID: \"269bc165-8fbc-4c63-84ef-96b74d44fc16\") " pod="openstack/horizon-fcb665488-kvv69" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.604136 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/269bc165-8fbc-4c63-84ef-96b74d44fc16-horizon-tls-certs\") pod \"horizon-fcb665488-kvv69\" (UID: \"269bc165-8fbc-4c63-84ef-96b74d44fc16\") " pod="openstack/horizon-fcb665488-kvv69" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.604157 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/269bc165-8fbc-4c63-84ef-96b74d44fc16-config-data\") pod \"horizon-fcb665488-kvv69\" (UID: \"269bc165-8fbc-4c63-84ef-96b74d44fc16\") " pod="openstack/horizon-fcb665488-kvv69" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.604236 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/269bc165-8fbc-4c63-84ef-96b74d44fc16-logs\") pod \"horizon-fcb665488-kvv69\" (UID: \"269bc165-8fbc-4c63-84ef-96b74d44fc16\") " pod="openstack/horizon-fcb665488-kvv69" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.604502 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/269bc165-8fbc-4c63-84ef-96b74d44fc16-logs\") pod \"horizon-fcb665488-kvv69\" (UID: \"269bc165-8fbc-4c63-84ef-96b74d44fc16\") " pod="openstack/horizon-fcb665488-kvv69" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.608738 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/269bc165-8fbc-4c63-84ef-96b74d44fc16-horizon-secret-key\") pod \"horizon-fcb665488-kvv69\" (UID: \"269bc165-8fbc-4c63-84ef-96b74d44fc16\") " pod="openstack/horizon-fcb665488-kvv69" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.614821 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/269bc165-8fbc-4c63-84ef-96b74d44fc16-horizon-tls-certs\") pod \"horizon-fcb665488-kvv69\" (UID: \"269bc165-8fbc-4c63-84ef-96b74d44fc16\") " pod="openstack/horizon-fcb665488-kvv69" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.616851 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/269bc165-8fbc-4c63-84ef-96b74d44fc16-combined-ca-bundle\") pod \"horizon-fcb665488-kvv69\" (UID: \"269bc165-8fbc-4c63-84ef-96b74d44fc16\") " pod="openstack/horizon-fcb665488-kvv69" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.619185 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmlm6\" (UniqueName: \"kubernetes.io/projected/269bc165-8fbc-4c63-84ef-96b74d44fc16-kube-api-access-xmlm6\") pod \"horizon-fcb665488-kvv69\" (UID: 
\"269bc165-8fbc-4c63-84ef-96b74d44fc16\") " pod="openstack/horizon-fcb665488-kvv69" Dec 01 15:03:46 crc kubenswrapper[4637]: I1201 15:03:46.720158 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-fcb665488-kvv69" Dec 01 15:03:47 crc kubenswrapper[4637]: I1201 15:03:47.082975 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" podUID="912d084a-f1c0-4389-a7ca-59acd0238493" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: connect: connection refused" Dec 01 15:03:47 crc kubenswrapper[4637]: E1201 15:03:47.581838 4637 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30f33978_c393_4faf_99c0_b6ce509f1d3f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30f33978_c393_4faf_99c0_b6ce509f1d3f.slice/crio-2ef67e4525c9b73e6616612d6ccc7159ce8d82799ebfa8ba7b4ec6d608d35b10\": RecentStats: unable to find data in memory cache]" Dec 01 15:03:48 crc kubenswrapper[4637]: I1201 15:03:48.255266 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-j85h5" Dec 01 15:03:48 crc kubenswrapper[4637]: I1201 15:03:48.342189 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-credential-keys\") pod \"a9d588ec-741e-4df6-94e1-a824c312d598\" (UID: \"a9d588ec-741e-4df6-94e1-a824c312d598\") " Dec 01 15:03:48 crc kubenswrapper[4637]: I1201 15:03:48.342237 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-scripts\") pod \"a9d588ec-741e-4df6-94e1-a824c312d598\" (UID: \"a9d588ec-741e-4df6-94e1-a824c312d598\") " Dec 01 15:03:48 crc kubenswrapper[4637]: I1201 15:03:48.342272 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-fernet-keys\") pod \"a9d588ec-741e-4df6-94e1-a824c312d598\" (UID: \"a9d588ec-741e-4df6-94e1-a824c312d598\") " Dec 01 15:03:48 crc kubenswrapper[4637]: I1201 15:03:48.342364 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-config-data\") pod \"a9d588ec-741e-4df6-94e1-a824c312d598\" (UID: \"a9d588ec-741e-4df6-94e1-a824c312d598\") " Dec 01 15:03:48 crc kubenswrapper[4637]: I1201 15:03:48.342387 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-combined-ca-bundle\") pod \"a9d588ec-741e-4df6-94e1-a824c312d598\" (UID: \"a9d588ec-741e-4df6-94e1-a824c312d598\") " Dec 01 15:03:48 crc kubenswrapper[4637]: I1201 15:03:48.342548 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvlx5\" (UniqueName: 
\"kubernetes.io/projected/a9d588ec-741e-4df6-94e1-a824c312d598-kube-api-access-pvlx5\") pod \"a9d588ec-741e-4df6-94e1-a824c312d598\" (UID: \"a9d588ec-741e-4df6-94e1-a824c312d598\") " Dec 01 15:03:48 crc kubenswrapper[4637]: I1201 15:03:48.348145 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9d588ec-741e-4df6-94e1-a824c312d598-kube-api-access-pvlx5" (OuterVolumeSpecName: "kube-api-access-pvlx5") pod "a9d588ec-741e-4df6-94e1-a824c312d598" (UID: "a9d588ec-741e-4df6-94e1-a824c312d598"). InnerVolumeSpecName "kube-api-access-pvlx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:03:48 crc kubenswrapper[4637]: I1201 15:03:48.350429 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-scripts" (OuterVolumeSpecName: "scripts") pod "a9d588ec-741e-4df6-94e1-a824c312d598" (UID: "a9d588ec-741e-4df6-94e1-a824c312d598"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:03:48 crc kubenswrapper[4637]: I1201 15:03:48.353878 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a9d588ec-741e-4df6-94e1-a824c312d598" (UID: "a9d588ec-741e-4df6-94e1-a824c312d598"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:03:48 crc kubenswrapper[4637]: I1201 15:03:48.367763 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a9d588ec-741e-4df6-94e1-a824c312d598" (UID: "a9d588ec-741e-4df6-94e1-a824c312d598"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:03:48 crc kubenswrapper[4637]: I1201 15:03:48.375480 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-config-data" (OuterVolumeSpecName: "config-data") pod "a9d588ec-741e-4df6-94e1-a824c312d598" (UID: "a9d588ec-741e-4df6-94e1-a824c312d598"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:03:48 crc kubenswrapper[4637]: I1201 15:03:48.378061 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9d588ec-741e-4df6-94e1-a824c312d598" (UID: "a9d588ec-741e-4df6-94e1-a824c312d598"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:03:48 crc kubenswrapper[4637]: I1201 15:03:48.446389 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvlx5\" (UniqueName: \"kubernetes.io/projected/a9d588ec-741e-4df6-94e1-a824c312d598-kube-api-access-pvlx5\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:48 crc kubenswrapper[4637]: I1201 15:03:48.446433 4637 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:48 crc kubenswrapper[4637]: I1201 15:03:48.446447 4637 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:48 crc kubenswrapper[4637]: I1201 15:03:48.446458 4637 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 01 
15:03:48 crc kubenswrapper[4637]: I1201 15:03:48.446470 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:48 crc kubenswrapper[4637]: I1201 15:03:48.446482 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d588ec-741e-4df6-94e1-a824c312d598-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:48 crc kubenswrapper[4637]: I1201 15:03:48.524781 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j85h5" event={"ID":"a9d588ec-741e-4df6-94e1-a824c312d598","Type":"ContainerDied","Data":"2eb056e786119521940ef60ae03838c596a50ae427e35f2cb7241ec214116d38"} Dec 01 15:03:48 crc kubenswrapper[4637]: I1201 15:03:48.525224 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2eb056e786119521940ef60ae03838c596a50ae427e35f2cb7241ec214116d38" Dec 01 15:03:48 crc kubenswrapper[4637]: I1201 15:03:48.525347 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-j85h5" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.354262 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-j85h5"] Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.361692 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-j85h5"] Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.442423 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-5lxgt"] Dec 01 15:03:49 crc kubenswrapper[4637]: E1201 15:03:49.442957 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9d588ec-741e-4df6-94e1-a824c312d598" containerName="keystone-bootstrap" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.442985 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9d588ec-741e-4df6-94e1-a824c312d598" containerName="keystone-bootstrap" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.443212 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9d588ec-741e-4df6-94e1-a824c312d598" containerName="keystone-bootstrap" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.443985 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5lxgt" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.450478 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.450682 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jfpqw" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.450702 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.451403 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.468829 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-combined-ca-bundle\") pod \"keystone-bootstrap-5lxgt\" (UID: \"cb48315c-5146-4f1e-9d0f-e39186e54083\") " pod="openstack/keystone-bootstrap-5lxgt" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.469345 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-config-data\") pod \"keystone-bootstrap-5lxgt\" (UID: \"cb48315c-5146-4f1e-9d0f-e39186e54083\") " pod="openstack/keystone-bootstrap-5lxgt" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.469385 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsf9z\" (UniqueName: \"kubernetes.io/projected/cb48315c-5146-4f1e-9d0f-e39186e54083-kube-api-access-jsf9z\") pod \"keystone-bootstrap-5lxgt\" (UID: \"cb48315c-5146-4f1e-9d0f-e39186e54083\") " pod="openstack/keystone-bootstrap-5lxgt" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.469410 
4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-fernet-keys\") pod \"keystone-bootstrap-5lxgt\" (UID: \"cb48315c-5146-4f1e-9d0f-e39186e54083\") " pod="openstack/keystone-bootstrap-5lxgt" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.469469 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-scripts\") pod \"keystone-bootstrap-5lxgt\" (UID: \"cb48315c-5146-4f1e-9d0f-e39186e54083\") " pod="openstack/keystone-bootstrap-5lxgt" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.469568 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-credential-keys\") pod \"keystone-bootstrap-5lxgt\" (UID: \"cb48315c-5146-4f1e-9d0f-e39186e54083\") " pod="openstack/keystone-bootstrap-5lxgt" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.469165 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5lxgt"] Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.571311 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-credential-keys\") pod \"keystone-bootstrap-5lxgt\" (UID: \"cb48315c-5146-4f1e-9d0f-e39186e54083\") " pod="openstack/keystone-bootstrap-5lxgt" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.571728 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-combined-ca-bundle\") pod \"keystone-bootstrap-5lxgt\" (UID: \"cb48315c-5146-4f1e-9d0f-e39186e54083\") " 
pod="openstack/keystone-bootstrap-5lxgt" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.572045 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-config-data\") pod \"keystone-bootstrap-5lxgt\" (UID: \"cb48315c-5146-4f1e-9d0f-e39186e54083\") " pod="openstack/keystone-bootstrap-5lxgt" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.572120 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsf9z\" (UniqueName: \"kubernetes.io/projected/cb48315c-5146-4f1e-9d0f-e39186e54083-kube-api-access-jsf9z\") pod \"keystone-bootstrap-5lxgt\" (UID: \"cb48315c-5146-4f1e-9d0f-e39186e54083\") " pod="openstack/keystone-bootstrap-5lxgt" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.572140 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-fernet-keys\") pod \"keystone-bootstrap-5lxgt\" (UID: \"cb48315c-5146-4f1e-9d0f-e39186e54083\") " pod="openstack/keystone-bootstrap-5lxgt" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.572192 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-scripts\") pod \"keystone-bootstrap-5lxgt\" (UID: \"cb48315c-5146-4f1e-9d0f-e39186e54083\") " pod="openstack/keystone-bootstrap-5lxgt" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.577368 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-scripts\") pod \"keystone-bootstrap-5lxgt\" (UID: \"cb48315c-5146-4f1e-9d0f-e39186e54083\") " pod="openstack/keystone-bootstrap-5lxgt" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.579485 4637 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-config-data\") pod \"keystone-bootstrap-5lxgt\" (UID: \"cb48315c-5146-4f1e-9d0f-e39186e54083\") " pod="openstack/keystone-bootstrap-5lxgt" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.579485 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-fernet-keys\") pod \"keystone-bootstrap-5lxgt\" (UID: \"cb48315c-5146-4f1e-9d0f-e39186e54083\") " pod="openstack/keystone-bootstrap-5lxgt" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.589456 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-credential-keys\") pod \"keystone-bootstrap-5lxgt\" (UID: \"cb48315c-5146-4f1e-9d0f-e39186e54083\") " pod="openstack/keystone-bootstrap-5lxgt" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.590583 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-combined-ca-bundle\") pod \"keystone-bootstrap-5lxgt\" (UID: \"cb48315c-5146-4f1e-9d0f-e39186e54083\") " pod="openstack/keystone-bootstrap-5lxgt" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.592850 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsf9z\" (UniqueName: \"kubernetes.io/projected/cb48315c-5146-4f1e-9d0f-e39186e54083-kube-api-access-jsf9z\") pod \"keystone-bootstrap-5lxgt\" (UID: \"cb48315c-5146-4f1e-9d0f-e39186e54083\") " pod="openstack/keystone-bootstrap-5lxgt" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.770354 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jfpqw" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 
15:03:49.778015 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5lxgt" Dec 01 15:03:49 crc kubenswrapper[4637]: I1201 15:03:49.785323 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9d588ec-741e-4df6-94e1-a824c312d598" path="/var/lib/kubelet/pods/a9d588ec-741e-4df6-94e1-a824c312d598/volumes" Dec 01 15:03:49 crc kubenswrapper[4637]: E1201 15:03:49.817426 4637 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3fc0a90b0dd7997947382c85ae8bad5574a86e5a47cd52918555522bd44d6a58/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3fc0a90b0dd7997947382c85ae8bad5574a86e5a47cd52918555522bd44d6a58/diff: no such file or directory, extraDiskErr: Dec 01 15:03:54 crc kubenswrapper[4637]: E1201 15:03:54.297464 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 01 15:03:54 crc kubenswrapper[4637]: E1201 15:03:54.298436 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n668h648h548h58fh59bh577h5f6h5ffh5fch655hf8h54ch596h7fh65h5c8h5b9h54h99h676h8ch4h9fh595h67bhb7h67bh659h585h558h6fh5cdq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-25sxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6f65455bb9-dgzkv_openstack(9689152e-e8b0-40ab-a2e6-d0441160c13a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 15:03:54 crc kubenswrapper[4637]: E1201 
15:03:54.304504 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6f65455bb9-dgzkv" podUID="9689152e-e8b0-40ab-a2e6-d0441160c13a" Dec 01 15:03:54 crc kubenswrapper[4637]: E1201 15:03:54.340245 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 01 15:03:54 crc kubenswrapper[4637]: E1201 15:03:54.340529 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n78h64ch87h56fh7fhdh5f5h68dhbdh5dch5b5h565h677hd4h66ch76h584hfhcfhfhbbh6h689h5b4h5f5h66dh545h545h9fh65ch5c9h65q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82zp9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6bfc57fb6f-w2dxg_openstack(09cbc5ac-7259-494d-8c1c-5d25eac1161c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 15:03:54 crc kubenswrapper[4637]: E1201 15:03:54.352268 
4637 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6bfc57fb6f-w2dxg" podUID="09cbc5ac-7259-494d-8c1c-5d25eac1161c" Dec 01 15:03:54 crc kubenswrapper[4637]: E1201 15:03:54.356346 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 01 15:03:54 crc kubenswrapper[4637]: E1201 15:03:54.356558 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n678h67bh695h657hf6h58dh85h54ch54chcdhd8h65h5fdh699h54dh5h65dh5d8h594h66ch598h668hc8h9h68bhc5h5fchc6h59h656h59fh567q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ht89r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-64664649c9-zh8fh_openstack(0db1960f-8ebf-4f82-a6d4-f780b553061a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 15:03:54 crc kubenswrapper[4637]: E1201 
15:03:54.369210 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-64664649c9-zh8fh" podUID="0db1960f-8ebf-4f82-a6d4-f780b553061a" Dec 01 15:03:54 crc kubenswrapper[4637]: I1201 15:03:54.613655 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" Dec 01 15:03:54 crc kubenswrapper[4637]: I1201 15:03:54.624288 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" event={"ID":"912d084a-f1c0-4389-a7ca-59acd0238493","Type":"ContainerDied","Data":"cb7e5c47c4e1d74bfbcfa1d67118e85441e303d6c2f9c26a956d17547cc102a0"} Dec 01 15:03:54 crc kubenswrapper[4637]: I1201 15:03:54.624630 4637 scope.go:117] "RemoveContainer" containerID="99f908c1d1d89787eee196da3bdfef4cfa942a87498a5ebdeb2f7e271942e027" Dec 01 15:03:54 crc kubenswrapper[4637]: I1201 15:03:54.738763 4637 scope.go:117] "RemoveContainer" containerID="f40e86093f45636caa05ca302334f0fb75851c11110d7803f243321d049281b4" Dec 01 15:03:54 crc kubenswrapper[4637]: I1201 15:03:54.765117 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-ovsdbserver-nb\") pod \"912d084a-f1c0-4389-a7ca-59acd0238493\" (UID: \"912d084a-f1c0-4389-a7ca-59acd0238493\") " Dec 01 15:03:54 crc kubenswrapper[4637]: I1201 15:03:54.765286 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-ovsdbserver-sb\") pod \"912d084a-f1c0-4389-a7ca-59acd0238493\" 
(UID: \"912d084a-f1c0-4389-a7ca-59acd0238493\") " Dec 01 15:03:54 crc kubenswrapper[4637]: I1201 15:03:54.765355 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-config\") pod \"912d084a-f1c0-4389-a7ca-59acd0238493\" (UID: \"912d084a-f1c0-4389-a7ca-59acd0238493\") " Dec 01 15:03:54 crc kubenswrapper[4637]: I1201 15:03:54.765377 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9lh6\" (UniqueName: \"kubernetes.io/projected/912d084a-f1c0-4389-a7ca-59acd0238493-kube-api-access-b9lh6\") pod \"912d084a-f1c0-4389-a7ca-59acd0238493\" (UID: \"912d084a-f1c0-4389-a7ca-59acd0238493\") " Dec 01 15:03:54 crc kubenswrapper[4637]: I1201 15:03:54.765420 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-dns-swift-storage-0\") pod \"912d084a-f1c0-4389-a7ca-59acd0238493\" (UID: \"912d084a-f1c0-4389-a7ca-59acd0238493\") " Dec 01 15:03:54 crc kubenswrapper[4637]: I1201 15:03:54.765456 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-dns-svc\") pod \"912d084a-f1c0-4389-a7ca-59acd0238493\" (UID: \"912d084a-f1c0-4389-a7ca-59acd0238493\") " Dec 01 15:03:54 crc kubenswrapper[4637]: I1201 15:03:54.839562 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/912d084a-f1c0-4389-a7ca-59acd0238493-kube-api-access-b9lh6" (OuterVolumeSpecName: "kube-api-access-b9lh6") pod "912d084a-f1c0-4389-a7ca-59acd0238493" (UID: "912d084a-f1c0-4389-a7ca-59acd0238493"). InnerVolumeSpecName "kube-api-access-b9lh6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:03:54 crc kubenswrapper[4637]: I1201 15:03:54.867444 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9lh6\" (UniqueName: \"kubernetes.io/projected/912d084a-f1c0-4389-a7ca-59acd0238493-kube-api-access-b9lh6\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:54 crc kubenswrapper[4637]: I1201 15:03:54.913363 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "912d084a-f1c0-4389-a7ca-59acd0238493" (UID: "912d084a-f1c0-4389-a7ca-59acd0238493"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:03:54 crc kubenswrapper[4637]: I1201 15:03:54.919492 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "912d084a-f1c0-4389-a7ca-59acd0238493" (UID: "912d084a-f1c0-4389-a7ca-59acd0238493"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:03:54 crc kubenswrapper[4637]: I1201 15:03:54.922231 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "912d084a-f1c0-4389-a7ca-59acd0238493" (UID: "912d084a-f1c0-4389-a7ca-59acd0238493"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:03:54 crc kubenswrapper[4637]: I1201 15:03:54.933350 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "912d084a-f1c0-4389-a7ca-59acd0238493" (UID: "912d084a-f1c0-4389-a7ca-59acd0238493"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:03:54 crc kubenswrapper[4637]: I1201 15:03:54.937895 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-config" (OuterVolumeSpecName: "config") pod "912d084a-f1c0-4389-a7ca-59acd0238493" (UID: "912d084a-f1c0-4389-a7ca-59acd0238493"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:03:54 crc kubenswrapper[4637]: I1201 15:03:54.969701 4637 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:54 crc kubenswrapper[4637]: I1201 15:03:54.969770 4637 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:54 crc kubenswrapper[4637]: I1201 15:03:54.969786 4637 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:54 crc kubenswrapper[4637]: I1201 15:03:54.969801 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-config\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:54 crc kubenswrapper[4637]: 
I1201 15:03:54.969814 4637 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/912d084a-f1c0-4389-a7ca-59acd0238493-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.066172 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.172751 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91fed8c1-7b36-4b01-8edb-35fe65411790-httpd-run\") pod \"91fed8c1-7b36-4b01-8edb-35fe65411790\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") " Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.173521 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91fed8c1-7b36-4b01-8edb-35fe65411790-scripts\") pod \"91fed8c1-7b36-4b01-8edb-35fe65411790\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") " Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.173738 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91fed8c1-7b36-4b01-8edb-35fe65411790-config-data\") pod \"91fed8c1-7b36-4b01-8edb-35fe65411790\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") " Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.173846 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"91fed8c1-7b36-4b01-8edb-35fe65411790\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") " Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.174084 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/91fed8c1-7b36-4b01-8edb-35fe65411790-logs\") pod \"91fed8c1-7b36-4b01-8edb-35fe65411790\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") " Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.174212 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj49l\" (UniqueName: \"kubernetes.io/projected/91fed8c1-7b36-4b01-8edb-35fe65411790-kube-api-access-bj49l\") pod \"91fed8c1-7b36-4b01-8edb-35fe65411790\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") " Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.174437 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fed8c1-7b36-4b01-8edb-35fe65411790-combined-ca-bundle\") pod \"91fed8c1-7b36-4b01-8edb-35fe65411790\" (UID: \"91fed8c1-7b36-4b01-8edb-35fe65411790\") " Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.173576 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91fed8c1-7b36-4b01-8edb-35fe65411790-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "91fed8c1-7b36-4b01-8edb-35fe65411790" (UID: "91fed8c1-7b36-4b01-8edb-35fe65411790"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.180395 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91fed8c1-7b36-4b01-8edb-35fe65411790-scripts" (OuterVolumeSpecName: "scripts") pod "91fed8c1-7b36-4b01-8edb-35fe65411790" (UID: "91fed8c1-7b36-4b01-8edb-35fe65411790"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.183849 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91fed8c1-7b36-4b01-8edb-35fe65411790-logs" (OuterVolumeSpecName: "logs") pod "91fed8c1-7b36-4b01-8edb-35fe65411790" (UID: "91fed8c1-7b36-4b01-8edb-35fe65411790"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.187723 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "91fed8c1-7b36-4b01-8edb-35fe65411790" (UID: "91fed8c1-7b36-4b01-8edb-35fe65411790"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.189831 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91fed8c1-7b36-4b01-8edb-35fe65411790-kube-api-access-bj49l" (OuterVolumeSpecName: "kube-api-access-bj49l") pod "91fed8c1-7b36-4b01-8edb-35fe65411790" (UID: "91fed8c1-7b36-4b01-8edb-35fe65411790"). InnerVolumeSpecName "kube-api-access-bj49l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.209674 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91fed8c1-7b36-4b01-8edb-35fe65411790-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91fed8c1-7b36-4b01-8edb-35fe65411790" (UID: "91fed8c1-7b36-4b01-8edb-35fe65411790"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.229232 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91fed8c1-7b36-4b01-8edb-35fe65411790-config-data" (OuterVolumeSpecName: "config-data") pod "91fed8c1-7b36-4b01-8edb-35fe65411790" (UID: "91fed8c1-7b36-4b01-8edb-35fe65411790"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.276975 4637 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91fed8c1-7b36-4b01-8edb-35fe65411790-logs\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.277015 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj49l\" (UniqueName: \"kubernetes.io/projected/91fed8c1-7b36-4b01-8edb-35fe65411790-kube-api-access-bj49l\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.284323 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fed8c1-7b36-4b01-8edb-35fe65411790-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.284357 4637 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91fed8c1-7b36-4b01-8edb-35fe65411790-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.284369 4637 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91fed8c1-7b36-4b01-8edb-35fe65411790-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.284379 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/91fed8c1-7b36-4b01-8edb-35fe65411790-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.284417 4637 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.311579 4637 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.385832 4637 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.561841 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f65455bb9-dgzkv" Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.581036 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-64664649c9-zh8fh" Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.596425 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6bfc57fb6f-w2dxg" Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.657357 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"604935ee-aa8a-461e-9bd9-f11ad29128e0","Type":"ContainerStarted","Data":"ea66906f740f05d232e3969f5a3f2e8f2ca61be689ee3d1c2d22bc42dcb03a72"} Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.658558 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f65455bb9-dgzkv" Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.659581 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f65455bb9-dgzkv" event={"ID":"9689152e-e8b0-40ab-a2e6-d0441160c13a","Type":"ContainerDied","Data":"7a9fd9c915fc4599000aeef05e64606ad675b7de0229b4c8b7f3df5ca71cc32a"} Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.661333 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-sb6hr" event={"ID":"ecec3227-52bd-4b05-83ac-90218117a222","Type":"ContainerStarted","Data":"d209f4062a14a78cd304c59f8d4191465a3bdbaef6e2e661fe6a33d7c9226ea8"} Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.662335 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64664649c9-zh8fh" event={"ID":"0db1960f-8ebf-4f82-a6d4-f780b553061a","Type":"ContainerDied","Data":"fb91dda9c10c208fb3f62ac54a716c3d09b493e6c058da315102dbc93dd16534"} Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.662390 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-64664649c9-zh8fh" Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.684872 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"91fed8c1-7b36-4b01-8edb-35fe65411790","Type":"ContainerDied","Data":"8f0f0c6e32ee8832e09f5bf0a18daddfc2c9cb56b78a7b9447f85edd16b51d66"} Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.685345 4637 scope.go:117] "RemoveContainer" containerID="800c2215b0bc96fbb47389b9bc46649e3a0d6182fcfeccfd34b65e67eec1a2f3" Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.685955 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.697568 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0db1960f-8ebf-4f82-a6d4-f780b553061a-config-data\") pod \"0db1960f-8ebf-4f82-a6d4-f780b553061a\" (UID: \"0db1960f-8ebf-4f82-a6d4-f780b553061a\") " Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.697635 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09cbc5ac-7259-494d-8c1c-5d25eac1161c-logs\") pod \"09cbc5ac-7259-494d-8c1c-5d25eac1161c\" (UID: \"09cbc5ac-7259-494d-8c1c-5d25eac1161c\") " Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.697674 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09cbc5ac-7259-494d-8c1c-5d25eac1161c-scripts\") pod \"09cbc5ac-7259-494d-8c1c-5d25eac1161c\" (UID: \"09cbc5ac-7259-494d-8c1c-5d25eac1161c\") " Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.697698 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9689152e-e8b0-40ab-a2e6-d0441160c13a-scripts\") pod \"9689152e-e8b0-40ab-a2e6-d0441160c13a\" (UID: \"9689152e-e8b0-40ab-a2e6-d0441160c13a\") " Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.697726 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25sxn\" (UniqueName: \"kubernetes.io/projected/9689152e-e8b0-40ab-a2e6-d0441160c13a-kube-api-access-25sxn\") pod \"9689152e-e8b0-40ab-a2e6-d0441160c13a\" (UID: \"9689152e-e8b0-40ab-a2e6-d0441160c13a\") " Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.697748 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9689152e-e8b0-40ab-a2e6-d0441160c13a-logs\") pod \"9689152e-e8b0-40ab-a2e6-d0441160c13a\" (UID: \"9689152e-e8b0-40ab-a2e6-d0441160c13a\") "
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.697838 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ht89r\" (UniqueName: \"kubernetes.io/projected/0db1960f-8ebf-4f82-a6d4-f780b553061a-kube-api-access-ht89r\") pod \"0db1960f-8ebf-4f82-a6d4-f780b553061a\" (UID: \"0db1960f-8ebf-4f82-a6d4-f780b553061a\") "
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.697879 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9689152e-e8b0-40ab-a2e6-d0441160c13a-horizon-secret-key\") pod \"9689152e-e8b0-40ab-a2e6-d0441160c13a\" (UID: \"9689152e-e8b0-40ab-a2e6-d0441160c13a\") "
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.697917 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82zp9\" (UniqueName: \"kubernetes.io/projected/09cbc5ac-7259-494d-8c1c-5d25eac1161c-kube-api-access-82zp9\") pod \"09cbc5ac-7259-494d-8c1c-5d25eac1161c\" (UID: \"09cbc5ac-7259-494d-8c1c-5d25eac1161c\") "
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.697952 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/09cbc5ac-7259-494d-8c1c-5d25eac1161c-horizon-secret-key\") pod \"09cbc5ac-7259-494d-8c1c-5d25eac1161c\" (UID: \"09cbc5ac-7259-494d-8c1c-5d25eac1161c\") "
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.697973 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0db1960f-8ebf-4f82-a6d4-f780b553061a-logs\") pod \"0db1960f-8ebf-4f82-a6d4-f780b553061a\" (UID: \"0db1960f-8ebf-4f82-a6d4-f780b553061a\") "
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.698018 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/09cbc5ac-7259-494d-8c1c-5d25eac1161c-config-data\") pod \"09cbc5ac-7259-494d-8c1c-5d25eac1161c\" (UID: \"09cbc5ac-7259-494d-8c1c-5d25eac1161c\") "
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.698057 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0db1960f-8ebf-4f82-a6d4-f780b553061a-horizon-secret-key\") pod \"0db1960f-8ebf-4f82-a6d4-f780b553061a\" (UID: \"0db1960f-8ebf-4f82-a6d4-f780b553061a\") "
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.698085 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0db1960f-8ebf-4f82-a6d4-f780b553061a-scripts\") pod \"0db1960f-8ebf-4f82-a6d4-f780b553061a\" (UID: \"0db1960f-8ebf-4f82-a6d4-f780b553061a\") "
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.698108 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9689152e-e8b0-40ab-a2e6-d0441160c13a-config-data\") pod \"9689152e-e8b0-40ab-a2e6-d0441160c13a\" (UID: \"9689152e-e8b0-40ab-a2e6-d0441160c13a\") "
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.702308 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9689152e-e8b0-40ab-a2e6-d0441160c13a-config-data" (OuterVolumeSpecName: "config-data") pod "9689152e-e8b0-40ab-a2e6-d0441160c13a" (UID: "9689152e-e8b0-40ab-a2e6-d0441160c13a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.703733 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0db1960f-8ebf-4f82-a6d4-f780b553061a-config-data" (OuterVolumeSpecName: "config-data") pod "0db1960f-8ebf-4f82-a6d4-f780b553061a" (UID: "0db1960f-8ebf-4f82-a6d4-f780b553061a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.704156 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09cbc5ac-7259-494d-8c1c-5d25eac1161c-logs" (OuterVolumeSpecName: "logs") pod "09cbc5ac-7259-494d-8c1c-5d25eac1161c" (UID: "09cbc5ac-7259-494d-8c1c-5d25eac1161c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.704449 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cbc5ac-7259-494d-8c1c-5d25eac1161c-scripts" (OuterVolumeSpecName: "scripts") pod "09cbc5ac-7259-494d-8c1c-5d25eac1161c" (UID: "09cbc5ac-7259-494d-8c1c-5d25eac1161c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.704758 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9689152e-e8b0-40ab-a2e6-d0441160c13a-scripts" (OuterVolumeSpecName: "scripts") pod "9689152e-e8b0-40ab-a2e6-d0441160c13a" (UID: "9689152e-e8b0-40ab-a2e6-d0441160c13a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.706126 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6bfc57fb6f-w2dxg" event={"ID":"09cbc5ac-7259-494d-8c1c-5d25eac1161c","Type":"ContainerDied","Data":"8caf538f4c83e3ae19dc58ffaa4ac7dada1bb54e5d7d8ea8cac7a60103c98b6c"}
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.706257 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6bfc57fb6f-w2dxg"
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.707065 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0db1960f-8ebf-4f82-a6d4-f780b553061a-logs" (OuterVolumeSpecName: "logs") pod "0db1960f-8ebf-4f82-a6d4-f780b553061a" (UID: "0db1960f-8ebf-4f82-a6d4-f780b553061a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.707218 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0db1960f-8ebf-4f82-a6d4-f780b553061a-scripts" (OuterVolumeSpecName: "scripts") pod "0db1960f-8ebf-4f82-a6d4-f780b553061a" (UID: "0db1960f-8ebf-4f82-a6d4-f780b553061a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.708241 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9689152e-e8b0-40ab-a2e6-d0441160c13a-logs" (OuterVolumeSpecName: "logs") pod "9689152e-e8b0-40ab-a2e6-d0441160c13a" (UID: "9689152e-e8b0-40ab-a2e6-d0441160c13a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.709445 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cbc5ac-7259-494d-8c1c-5d25eac1161c-config-data" (OuterVolumeSpecName: "config-data") pod "09cbc5ac-7259-494d-8c1c-5d25eac1161c" (UID: "09cbc5ac-7259-494d-8c1c-5d25eac1161c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.717570 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-sb6hr" podStartSLOduration=3.768164836 podStartE2EDuration="23.717550091s" podCreationTimestamp="2025-12-01 15:03:32 +0000 UTC" firstStartedPulling="2025-12-01 15:03:34.495722504 +0000 UTC m=+1065.013431332" lastFinishedPulling="2025-12-01 15:03:54.445107769 +0000 UTC m=+1084.962816587" observedRunningTime="2025-12-01 15:03:55.706403259 +0000 UTC m=+1086.224112087" watchObservedRunningTime="2025-12-01 15:03:55.717550091 +0000 UTC m=+1086.235258909"
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.720626 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98"
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.732631 4637 scope.go:117] "RemoveContainer" containerID="e2bc4a7abef980d55bdda9c4c0b11f1fe7fa07d6a79069008bc2f0891fbcc922"
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.756058 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0db1960f-8ebf-4f82-a6d4-f780b553061a-kube-api-access-ht89r" (OuterVolumeSpecName: "kube-api-access-ht89r") pod "0db1960f-8ebf-4f82-a6d4-f780b553061a" (UID: "0db1960f-8ebf-4f82-a6d4-f780b553061a"). InnerVolumeSpecName "kube-api-access-ht89r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.763182 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cbc5ac-7259-494d-8c1c-5d25eac1161c-kube-api-access-82zp9" (OuterVolumeSpecName: "kube-api-access-82zp9") pod "09cbc5ac-7259-494d-8c1c-5d25eac1161c" (UID: "09cbc5ac-7259-494d-8c1c-5d25eac1161c"). InnerVolumeSpecName "kube-api-access-82zp9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.763282 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9689152e-e8b0-40ab-a2e6-d0441160c13a-kube-api-access-25sxn" (OuterVolumeSpecName: "kube-api-access-25sxn") pod "9689152e-e8b0-40ab-a2e6-d0441160c13a" (UID: "9689152e-e8b0-40ab-a2e6-d0441160c13a"). InnerVolumeSpecName "kube-api-access-25sxn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.763676 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0db1960f-8ebf-4f82-a6d4-f780b553061a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "0db1960f-8ebf-4f82-a6d4-f780b553061a" (UID: "0db1960f-8ebf-4f82-a6d4-f780b553061a"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.763865 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cbc5ac-7259-494d-8c1c-5d25eac1161c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "09cbc5ac-7259-494d-8c1c-5d25eac1161c" (UID: "09cbc5ac-7259-494d-8c1c-5d25eac1161c"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.765112 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9689152e-e8b0-40ab-a2e6-d0441160c13a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9689152e-e8b0-40ab-a2e6-d0441160c13a" (UID: "9689152e-e8b0-40ab-a2e6-d0441160c13a"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.799908 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ht89r\" (UniqueName: \"kubernetes.io/projected/0db1960f-8ebf-4f82-a6d4-f780b553061a-kube-api-access-ht89r\") on node \"crc\" DevicePath \"\""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.799954 4637 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9689152e-e8b0-40ab-a2e6-d0441160c13a-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.799965 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82zp9\" (UniqueName: \"kubernetes.io/projected/09cbc5ac-7259-494d-8c1c-5d25eac1161c-kube-api-access-82zp9\") on node \"crc\" DevicePath \"\""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.799975 4637 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/09cbc5ac-7259-494d-8c1c-5d25eac1161c-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.799985 4637 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0db1960f-8ebf-4f82-a6d4-f780b553061a-logs\") on node \"crc\" DevicePath \"\""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.799993 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/09cbc5ac-7259-494d-8c1c-5d25eac1161c-config-data\") on node \"crc\" DevicePath \"\""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.800004 4637 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0db1960f-8ebf-4f82-a6d4-f780b553061a-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.800013 4637 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0db1960f-8ebf-4f82-a6d4-f780b553061a-scripts\") on node \"crc\" DevicePath \"\""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.800022 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9689152e-e8b0-40ab-a2e6-d0441160c13a-config-data\") on node \"crc\" DevicePath \"\""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.800031 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0db1960f-8ebf-4f82-a6d4-f780b553061a-config-data\") on node \"crc\" DevicePath \"\""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.800039 4637 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09cbc5ac-7259-494d-8c1c-5d25eac1161c-logs\") on node \"crc\" DevicePath \"\""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.800048 4637 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09cbc5ac-7259-494d-8c1c-5d25eac1161c-scripts\") on node \"crc\" DevicePath \"\""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.800059 4637 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9689152e-e8b0-40ab-a2e6-d0441160c13a-scripts\") on node \"crc\" DevicePath \"\""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.800070 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25sxn\" (UniqueName: \"kubernetes.io/projected/9689152e-e8b0-40ab-a2e6-d0441160c13a-kube-api-access-25sxn\") on node \"crc\" DevicePath \"\""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.800080 4637 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9689152e-e8b0-40ab-a2e6-d0441160c13a-logs\") on node \"crc\" DevicePath \"\""
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.848419 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-zc7pv"]
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.882042 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-x4gwb"]
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.899648 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5lxgt"]
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.968004 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-fcb665488-kvv69"]
Dec 01 15:03:55 crc kubenswrapper[4637]: I1201 15:03:55.994143 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-64585bdddb-h9hvw"]
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.020421 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-vlwgj"]
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.065531 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.469209 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-nvk98"]
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.488921 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-nvk98"]
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.519781 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.560726 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 01 15:03:56 crc kubenswrapper[4637]: W1201 15:03:56.569465 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7137da0_10ce_4ac0_8e2e_658247d8c0b7.slice/crio-58c61f9ff1c33aed23dc72d02618b788f834ba17c8f983c9f920592d2e6f60c8 WatchSource:0}: Error finding container 58c61f9ff1c33aed23dc72d02618b788f834ba17c8f983c9f920592d2e6f60c8: Status 404 returned error can't find the container with id 58c61f9ff1c33aed23dc72d02618b788f834ba17c8f983c9f920592d2e6f60c8
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.571203 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 01 15:03:56 crc kubenswrapper[4637]: E1201 15:03:56.571559 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="912d084a-f1c0-4389-a7ca-59acd0238493" containerName="init"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.571575 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="912d084a-f1c0-4389-a7ca-59acd0238493" containerName="init"
Dec 01 15:03:56 crc kubenswrapper[4637]: E1201 15:03:56.571591 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91fed8c1-7b36-4b01-8edb-35fe65411790" containerName="glance-httpd"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.571599 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="91fed8c1-7b36-4b01-8edb-35fe65411790" containerName="glance-httpd"
Dec 01 15:03:56 crc kubenswrapper[4637]: E1201 15:03:56.571610 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91fed8c1-7b36-4b01-8edb-35fe65411790" containerName="glance-log"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.571616 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="91fed8c1-7b36-4b01-8edb-35fe65411790" containerName="glance-log"
Dec 01 15:03:56 crc kubenswrapper[4637]: E1201 15:03:56.571628 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="912d084a-f1c0-4389-a7ca-59acd0238493" containerName="dnsmasq-dns"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.571633 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="912d084a-f1c0-4389-a7ca-59acd0238493" containerName="dnsmasq-dns"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.571787 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="91fed8c1-7b36-4b01-8edb-35fe65411790" containerName="glance-httpd"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.571805 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="91fed8c1-7b36-4b01-8edb-35fe65411790" containerName="glance-log"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.571819 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="912d084a-f1c0-4389-a7ca-59acd0238493" containerName="dnsmasq-dns"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.576947 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.590267 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.593101 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.599200 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.726355 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db43f411-7028-4fde-ac84-bc4b00053f4f-logs\") pod \"glance-default-internal-api-0\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.726404 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94sdm\" (UniqueName: \"kubernetes.io/projected/db43f411-7028-4fde-ac84-bc4b00053f4f-kube-api-access-94sdm\") pod \"glance-default-internal-api-0\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.726435 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db43f411-7028-4fde-ac84-bc4b00053f4f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.726466 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db43f411-7028-4fde-ac84-bc4b00053f4f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.726522 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db43f411-7028-4fde-ac84-bc4b00053f4f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.726547 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db43f411-7028-4fde-ac84-bc4b00053f4f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.726608 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db43f411-7028-4fde-ac84-bc4b00053f4f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.726628 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.778668 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fcb665488-kvv69" event={"ID":"269bc165-8fbc-4c63-84ef-96b74d44fc16","Type":"ContainerStarted","Data":"0b122c73f9a4ed4e3c814190af510c75a35fcf60b2cf2478fff9b00698fc33f1"}
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.811143 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5lxgt" event={"ID":"cb48315c-5146-4f1e-9d0f-e39186e54083","Type":"ContainerStarted","Data":"11c90b4353b44b69b6f54eb8147adf6d7dd797f8e228d0f551a1cdf67e63c9a1"}
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.821539 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-64664649c9-zh8fh"]
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.831366 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db43f411-7028-4fde-ac84-bc4b00053f4f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.831414 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db43f411-7028-4fde-ac84-bc4b00053f4f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.831441 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-x4gwb" event={"ID":"31a1344d-109c-400f-ac50-60be5fed1255","Type":"ContainerStarted","Data":"fd84a2019a33a54702d4c9969c818bd8dfc80c52cd893e2fdd02ea040440a7ea"}
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.831496 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db43f411-7028-4fde-ac84-bc4b00053f4f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.831525 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.831581 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db43f411-7028-4fde-ac84-bc4b00053f4f-logs\") pod \"glance-default-internal-api-0\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.831603 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94sdm\" (UniqueName: \"kubernetes.io/projected/db43f411-7028-4fde-ac84-bc4b00053f4f-kube-api-access-94sdm\") pod \"glance-default-internal-api-0\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.831624 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db43f411-7028-4fde-ac84-bc4b00053f4f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.831665 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db43f411-7028-4fde-ac84-bc4b00053f4f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.832280 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db43f411-7028-4fde-ac84-bc4b00053f4f-logs\") pod \"glance-default-internal-api-0\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.832408 4637 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.832718 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db43f411-7028-4fde-ac84-bc4b00053f4f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.854009 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db43f411-7028-4fde-ac84-bc4b00053f4f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.855019 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-64664649c9-zh8fh"]
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.855741 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2e298d96-fc57-44a4-9085-f1c89c1be872","Type":"ContainerStarted","Data":"33fc27ba711898ac8782eaf9ae9feeda2096d0b70e9155500fb07742642b5971"}
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.866646 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db43f411-7028-4fde-ac84-bc4b00053f4f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.867201 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db43f411-7028-4fde-ac84-bc4b00053f4f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.867967 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zc7pv" event={"ID":"a7137da0-10ce-4ac0-8e2e-658247d8c0b7","Type":"ContainerStarted","Data":"58c61f9ff1c33aed23dc72d02618b788f834ba17c8f983c9f920592d2e6f60c8"}
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.877016 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64585bdddb-h9hvw" event={"ID":"29e960b7-8574-4c38-bb22-67f5a77aaca6","Type":"ContainerStarted","Data":"6f96765cfc7534647988b65e28c0a422686925ee42553d89bd82da2da3a3bcad"}
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.882157 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vlwgj" event={"ID":"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d","Type":"ContainerStarted","Data":"f45e5426049548ad70c3c43ba2139988f04b7cb01db487822d5e69361ea8273c"}
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.891587 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db43f411-7028-4fde-ac84-bc4b00053f4f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.898269 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f65455bb9-dgzkv"]
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.903808 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94sdm\" (UniqueName: \"kubernetes.io/projected/db43f411-7028-4fde-ac84-bc4b00053f4f-kube-api-access-94sdm\") pod \"glance-default-internal-api-0\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.925499 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6f65455bb9-dgzkv"]
Dec 01 15:03:56 crc kubenswrapper[4637]: I1201 15:03:56.948207 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " pod="openstack/glance-default-internal-api-0"
Dec 01 15:03:57 crc kubenswrapper[4637]: I1201 15:03:57.082960 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-nvk98" podUID="912d084a-f1c0-4389-a7ca-59acd0238493" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: i/o timeout"
Dec 01 15:03:57 crc kubenswrapper[4637]: I1201 15:03:57.083383 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 01 15:03:57 crc kubenswrapper[4637]: I1201 15:03:57.787783 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0db1960f-8ebf-4f82-a6d4-f780b553061a" path="/var/lib/kubelet/pods/0db1960f-8ebf-4f82-a6d4-f780b553061a/volumes"
Dec 01 15:03:57 crc kubenswrapper[4637]: I1201 15:03:57.789970 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="912d084a-f1c0-4389-a7ca-59acd0238493" path="/var/lib/kubelet/pods/912d084a-f1c0-4389-a7ca-59acd0238493/volumes"
Dec 01 15:03:57 crc kubenswrapper[4637]: I1201 15:03:57.793487 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91fed8c1-7b36-4b01-8edb-35fe65411790" path="/var/lib/kubelet/pods/91fed8c1-7b36-4b01-8edb-35fe65411790/volumes"
Dec 01 15:03:57 crc kubenswrapper[4637]: I1201 15:03:57.794339 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9689152e-e8b0-40ab-a2e6-d0441160c13a" path="/var/lib/kubelet/pods/9689152e-e8b0-40ab-a2e6-d0441160c13a/volumes"
Dec 01 15:03:57 crc kubenswrapper[4637]: I1201 15:03:57.909357 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fcb665488-kvv69" event={"ID":"269bc165-8fbc-4c63-84ef-96b74d44fc16","Type":"ContainerStarted","Data":"7ab471ea25b7de2d6becbf762ccabb88a6a985dd6342e79acf2b5246cfee1637"}
Dec 01 15:03:57 crc kubenswrapper[4637]: I1201 15:03:57.914808 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 01 15:03:57 crc kubenswrapper[4637]: I1201 15:03:57.920617 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5lxgt" event={"ID":"cb48315c-5146-4f1e-9d0f-e39186e54083","Type":"ContainerStarted","Data":"b5b22f55974133b109af48d3e32c860a4cf1a0e9836c6dfbb8acc43cc7b8a4d3"}
Dec 01 15:03:57 crc kubenswrapper[4637]: I1201 15:03:57.929552 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-x4gwb" event={"ID":"31a1344d-109c-400f-ac50-60be5fed1255","Type":"ContainerStarted","Data":"23ccf08a746215d892e4fd11542cc3e54b19e6516642cd4c1d601543bf54b8c0"}
Dec 01 15:03:57 crc kubenswrapper[4637]: I1201 15:03:57.942871 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2e298d96-fc57-44a4-9085-f1c89c1be872","Type":"ContainerStarted","Data":"d0197255adce68c41a63805d52e97aabdabde04a0c16cb11eeb1e5b9719f79ac"}
Dec 01 15:03:57 crc kubenswrapper[4637]: I1201 15:03:57.959667 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-5lxgt" podStartSLOduration=8.959613798 podStartE2EDuration="8.959613798s" podCreationTimestamp="2025-12-01 15:03:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:03:57.954710895 +0000 UTC m=+1088.472419723" watchObservedRunningTime="2025-12-01 15:03:57.959613798 +0000 UTC m=+1088.477322626"
Dec 01 15:03:57 crc kubenswrapper[4637]: I1201 15:03:57.963084 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64585bdddb-h9hvw" event={"ID":"29e960b7-8574-4c38-bb22-67f5a77aaca6","Type":"ContainerStarted","Data":"dc893b69f071bb1a0f4f2bb759966b8e137cd8b71a954090eda6934ed381e989"}
Dec 01 15:03:57 crc kubenswrapper[4637]: I1201 15:03:57.983550 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-x4gwb" podStartSLOduration=14.983528366 podStartE2EDuration="14.983528366s" podCreationTimestamp="2025-12-01 15:03:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:03:57.98219648 +0000 UTC m=+1088.499905308" watchObservedRunningTime="2025-12-01 15:03:57.983528366 +0000 UTC m=+1088.501237184"
Dec 01 15:03:58 crc kubenswrapper[4637]: W1201 15:03:58.549287 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb43f411_7028_4fde_ac84_bc4b00053f4f.slice/crio-4cd29e1699bd8a1e6dbf772a46b97ab11c6b1d78f3b7117f343c52b52ab76ef9 WatchSource:0}: Error finding container 4cd29e1699bd8a1e6dbf772a46b97ab11c6b1d78f3b7117f343c52b52ab76ef9: Status 404 returned error can't find the container with id 4cd29e1699bd8a1e6dbf772a46b97ab11c6b1d78f3b7117f343c52b52ab76ef9
Dec 01 15:03:59 crc kubenswrapper[4637]: I1201 15:03:59.072435 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"db43f411-7028-4fde-ac84-bc4b00053f4f","Type":"ContainerStarted","Data":"4cd29e1699bd8a1e6dbf772a46b97ab11c6b1d78f3b7117f343c52b52ab76ef9"}
Dec 01 15:03:59 crc kubenswrapper[4637]: I1201 15:03:59.075469 4637 generic.go:334] "Generic (PLEG): container finished" podID="ecec3227-52bd-4b05-83ac-90218117a222" containerID="d209f4062a14a78cd304c59f8d4191465a3bdbaef6e2e661fe6a33d7c9226ea8" exitCode=0
Dec 01 15:03:59 crc kubenswrapper[4637]: I1201 15:03:59.075612 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-sb6hr" event={"ID":"ecec3227-52bd-4b05-83ac-90218117a222","Type":"ContainerDied","Data":"d209f4062a14a78cd304c59f8d4191465a3bdbaef6e2e661fe6a33d7c9226ea8"}
Dec 01 15:04:00 crc kubenswrapper[4637]: I1201 15:04:00.087779 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2e298d96-fc57-44a4-9085-f1c89c1be872","Type":"ContainerStarted","Data":"5cf25168a057ba4a56019d7ceabd539e4876523898d6d2a3bea59d4f882a2965"}
Dec 01 15:04:00 crc kubenswrapper[4637]: I1201 15:04:00.087851 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2e298d96-fc57-44a4-9085-f1c89c1be872" containerName="glance-log" containerID="cri-o://d0197255adce68c41a63805d52e97aabdabde04a0c16cb11eeb1e5b9719f79ac" gracePeriod=30
Dec 01 15:04:00 crc kubenswrapper[4637]: I1201 15:04:00.087884 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2e298d96-fc57-44a4-9085-f1c89c1be872" containerName="glance-httpd" containerID="cri-o://5cf25168a057ba4a56019d7ceabd539e4876523898d6d2a3bea59d4f882a2965" gracePeriod=30
Dec 01 15:04:00 crc kubenswrapper[4637]: I1201 15:04:00.105208 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"db43f411-7028-4fde-ac84-bc4b00053f4f","Type":"ContainerStarted","Data":"a12cb5f75cb97643ef287b5f85a9c45138d146a160818156652a4785283b56f6"}
Dec 01 15:04:00 crc kubenswrapper[4637]: I1201 15:04:00.112336 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=19.112314243 podStartE2EDuration="19.112314243s" podCreationTimestamp="2025-12-01 15:03:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:04:00.111606635 +0000 UTC m=+1090.629315463" watchObservedRunningTime="2025-12-01 15:04:00.112314243 +0000 UTC m=+1090.630023071"
Dec 01 15:04:00 crc kubenswrapper[4637]: I1201 15:04:00.133379 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64585bdddb-h9hvw" event={"ID":"29e960b7-8574-4c38-bb22-67f5a77aaca6","Type":"ContainerStarted","Data":"d5804f49e6c3c4ca3bca0e4b57289f3cace1c1cf7586c4a959a71634341596a3"}
Dec 01 15:04:00 crc kubenswrapper[4637]: I1201 15:04:00.149259 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"604935ee-aa8a-461e-9bd9-f11ad29128e0","Type":"ContainerStarted","Data":"7f9c7cf675e28d17e863b7f77c787446193ef11f2ab1e4521ae4276018cd005a"}
Dec 01 15:04:00 crc
kubenswrapper[4637]: I1201 15:04:00.156425 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-64585bdddb-h9hvw" podStartSLOduration=13.576832905 podStartE2EDuration="14.156408709s" podCreationTimestamp="2025-12-01 15:03:46 +0000 UTC" firstStartedPulling="2025-12-01 15:03:56.518299641 +0000 UTC m=+1087.036008469" lastFinishedPulling="2025-12-01 15:03:57.097875445 +0000 UTC m=+1087.615584273" observedRunningTime="2025-12-01 15:04:00.153406667 +0000 UTC m=+1090.671115485" watchObservedRunningTime="2025-12-01 15:04:00.156408709 +0000 UTC m=+1090.674117537" Dec 01 15:04:00 crc kubenswrapper[4637]: I1201 15:04:00.168398 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fcb665488-kvv69" event={"ID":"269bc165-8fbc-4c63-84ef-96b74d44fc16","Type":"ContainerStarted","Data":"f9adb1fa696c8dec3b59dca580a080de5fa5eeef5dd0fe06d3b4a9fafab67c10"} Dec 01 15:04:00 crc kubenswrapper[4637]: I1201 15:04:00.223289 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-fcb665488-kvv69" podStartSLOduration=13.669852384 podStartE2EDuration="14.223264891s" podCreationTimestamp="2025-12-01 15:03:46 +0000 UTC" firstStartedPulling="2025-12-01 15:03:56.591856503 +0000 UTC m=+1087.109565331" lastFinishedPulling="2025-12-01 15:03:57.14526901 +0000 UTC m=+1087.662977838" observedRunningTime="2025-12-01 15:04:00.206533627 +0000 UTC m=+1090.724242455" watchObservedRunningTime="2025-12-01 15:04:00.223264891 +0000 UTC m=+1090.740973719" Dec 01 15:04:00 crc kubenswrapper[4637]: I1201 15:04:00.834395 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-sb6hr" Dec 01 15:04:00 crc kubenswrapper[4637]: I1201 15:04:00.955170 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 15:04:00 crc kubenswrapper[4637]: I1201 15:04:00.989097 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecec3227-52bd-4b05-83ac-90218117a222-config-data\") pod \"ecec3227-52bd-4b05-83ac-90218117a222\" (UID: \"ecec3227-52bd-4b05-83ac-90218117a222\") " Dec 01 15:04:00 crc kubenswrapper[4637]: I1201 15:04:00.989168 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xs9s\" (UniqueName: \"kubernetes.io/projected/ecec3227-52bd-4b05-83ac-90218117a222-kube-api-access-7xs9s\") pod \"ecec3227-52bd-4b05-83ac-90218117a222\" (UID: \"ecec3227-52bd-4b05-83ac-90218117a222\") " Dec 01 15:04:00 crc kubenswrapper[4637]: I1201 15:04:00.990218 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecec3227-52bd-4b05-83ac-90218117a222-scripts\") pod \"ecec3227-52bd-4b05-83ac-90218117a222\" (UID: \"ecec3227-52bd-4b05-83ac-90218117a222\") " Dec 01 15:04:00 crc kubenswrapper[4637]: I1201 15:04:00.990255 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecec3227-52bd-4b05-83ac-90218117a222-combined-ca-bundle\") pod \"ecec3227-52bd-4b05-83ac-90218117a222\" (UID: \"ecec3227-52bd-4b05-83ac-90218117a222\") " Dec 01 15:04:00 crc kubenswrapper[4637]: I1201 15:04:00.990281 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecec3227-52bd-4b05-83ac-90218117a222-logs\") pod \"ecec3227-52bd-4b05-83ac-90218117a222\" (UID: \"ecec3227-52bd-4b05-83ac-90218117a222\") " Dec 01 15:04:00 crc kubenswrapper[4637]: I1201 15:04:00.994054 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ecec3227-52bd-4b05-83ac-90218117a222-logs" (OuterVolumeSpecName: "logs") pod "ecec3227-52bd-4b05-83ac-90218117a222" (UID: "ecec3227-52bd-4b05-83ac-90218117a222"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.035750 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecec3227-52bd-4b05-83ac-90218117a222-scripts" (OuterVolumeSpecName: "scripts") pod "ecec3227-52bd-4b05-83ac-90218117a222" (UID: "ecec3227-52bd-4b05-83ac-90218117a222"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.036361 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecec3227-52bd-4b05-83ac-90218117a222-kube-api-access-7xs9s" (OuterVolumeSpecName: "kube-api-access-7xs9s") pod "ecec3227-52bd-4b05-83ac-90218117a222" (UID: "ecec3227-52bd-4b05-83ac-90218117a222"). InnerVolumeSpecName "kube-api-access-7xs9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.057977 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecec3227-52bd-4b05-83ac-90218117a222-config-data" (OuterVolumeSpecName: "config-data") pod "ecec3227-52bd-4b05-83ac-90218117a222" (UID: "ecec3227-52bd-4b05-83ac-90218117a222"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.092708 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e298d96-fc57-44a4-9085-f1c89c1be872-combined-ca-bundle\") pod \"2e298d96-fc57-44a4-9085-f1c89c1be872\" (UID: \"2e298d96-fc57-44a4-9085-f1c89c1be872\") " Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.092898 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e298d96-fc57-44a4-9085-f1c89c1be872-scripts\") pod \"2e298d96-fc57-44a4-9085-f1c89c1be872\" (UID: \"2e298d96-fc57-44a4-9085-f1c89c1be872\") " Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.092989 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e298d96-fc57-44a4-9085-f1c89c1be872-logs\") pod \"2e298d96-fc57-44a4-9085-f1c89c1be872\" (UID: \"2e298d96-fc57-44a4-9085-f1c89c1be872\") " Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.093025 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbsxf\" (UniqueName: \"kubernetes.io/projected/2e298d96-fc57-44a4-9085-f1c89c1be872-kube-api-access-jbsxf\") pod \"2e298d96-fc57-44a4-9085-f1c89c1be872\" (UID: \"2e298d96-fc57-44a4-9085-f1c89c1be872\") " Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.093045 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e298d96-fc57-44a4-9085-f1c89c1be872-config-data\") pod \"2e298d96-fc57-44a4-9085-f1c89c1be872\" (UID: \"2e298d96-fc57-44a4-9085-f1c89c1be872\") " Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.093085 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/2e298d96-fc57-44a4-9085-f1c89c1be872-httpd-run\") pod \"2e298d96-fc57-44a4-9085-f1c89c1be872\" (UID: \"2e298d96-fc57-44a4-9085-f1c89c1be872\") " Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.093146 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"2e298d96-fc57-44a4-9085-f1c89c1be872\" (UID: \"2e298d96-fc57-44a4-9085-f1c89c1be872\") " Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.094241 4637 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecec3227-52bd-4b05-83ac-90218117a222-logs\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.094262 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecec3227-52bd-4b05-83ac-90218117a222-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.094277 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xs9s\" (UniqueName: \"kubernetes.io/projected/ecec3227-52bd-4b05-83ac-90218117a222-kube-api-access-7xs9s\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.094289 4637 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecec3227-52bd-4b05-83ac-90218117a222-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.095802 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e298d96-fc57-44a4-9085-f1c89c1be872-logs" (OuterVolumeSpecName: "logs") pod "2e298d96-fc57-44a4-9085-f1c89c1be872" (UID: "2e298d96-fc57-44a4-9085-f1c89c1be872"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.096653 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e298d96-fc57-44a4-9085-f1c89c1be872-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2e298d96-fc57-44a4-9085-f1c89c1be872" (UID: "2e298d96-fc57-44a4-9085-f1c89c1be872"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.103911 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecec3227-52bd-4b05-83ac-90218117a222-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ecec3227-52bd-4b05-83ac-90218117a222" (UID: "ecec3227-52bd-4b05-83ac-90218117a222"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.118276 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e298d96-fc57-44a4-9085-f1c89c1be872-scripts" (OuterVolumeSpecName: "scripts") pod "2e298d96-fc57-44a4-9085-f1c89c1be872" (UID: "2e298d96-fc57-44a4-9085-f1c89c1be872"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.121433 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e298d96-fc57-44a4-9085-f1c89c1be872-kube-api-access-jbsxf" (OuterVolumeSpecName: "kube-api-access-jbsxf") pod "2e298d96-fc57-44a4-9085-f1c89c1be872" (UID: "2e298d96-fc57-44a4-9085-f1c89c1be872"). InnerVolumeSpecName "kube-api-access-jbsxf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.121593 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "2e298d96-fc57-44a4-9085-f1c89c1be872" (UID: "2e298d96-fc57-44a4-9085-f1c89c1be872"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.148636 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e298d96-fc57-44a4-9085-f1c89c1be872-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e298d96-fc57-44a4-9085-f1c89c1be872" (UID: "2e298d96-fc57-44a4-9085-f1c89c1be872"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.194419 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-sb6hr" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.200723 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-sb6hr" event={"ID":"ecec3227-52bd-4b05-83ac-90218117a222","Type":"ContainerDied","Data":"635f9aa7fa26165fcb77db2b164d2e0b58ab2bdd41305493af81cad4ebdc6537"} Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.206585 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="635f9aa7fa26165fcb77db2b164d2e0b58ab2bdd41305493af81cad4ebdc6537" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.207207 4637 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e298d96-fc57-44a4-9085-f1c89c1be872-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.207260 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecec3227-52bd-4b05-83ac-90218117a222-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.207280 4637 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e298d96-fc57-44a4-9085-f1c89c1be872-logs\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.207294 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbsxf\" (UniqueName: \"kubernetes.io/projected/2e298d96-fc57-44a4-9085-f1c89c1be872-kube-api-access-jbsxf\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.207306 4637 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e298d96-fc57-44a4-9085-f1c89c1be872-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.207363 4637 reconciler_common.go:286] 
"operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.207377 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e298d96-fc57-44a4-9085-f1c89c1be872-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.238291 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e298d96-fc57-44a4-9085-f1c89c1be872-config-data" (OuterVolumeSpecName: "config-data") pod "2e298d96-fc57-44a4-9085-f1c89c1be872" (UID: "2e298d96-fc57-44a4-9085-f1c89c1be872"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.258970 4637 generic.go:334] "Generic (PLEG): container finished" podID="2e298d96-fc57-44a4-9085-f1c89c1be872" containerID="5cf25168a057ba4a56019d7ceabd539e4876523898d6d2a3bea59d4f882a2965" exitCode=143 Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.258998 4637 generic.go:334] "Generic (PLEG): container finished" podID="2e298d96-fc57-44a4-9085-f1c89c1be872" containerID="d0197255adce68c41a63805d52e97aabdabde04a0c16cb11eeb1e5b9719f79ac" exitCode=143 Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.260123 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.260290 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2e298d96-fc57-44a4-9085-f1c89c1be872","Type":"ContainerDied","Data":"5cf25168a057ba4a56019d7ceabd539e4876523898d6d2a3bea59d4f882a2965"} Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.260409 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2e298d96-fc57-44a4-9085-f1c89c1be872","Type":"ContainerDied","Data":"d0197255adce68c41a63805d52e97aabdabde04a0c16cb11eeb1e5b9719f79ac"} Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.260430 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2e298d96-fc57-44a4-9085-f1c89c1be872","Type":"ContainerDied","Data":"33fc27ba711898ac8782eaf9ae9feeda2096d0b70e9155500fb07742642b5971"} Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.260457 4637 scope.go:117] "RemoveContainer" containerID="5cf25168a057ba4a56019d7ceabd539e4876523898d6d2a3bea59d4f882a2965" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.309001 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-85bcc8d488-896bl"] Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.309088 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e298d96-fc57-44a4-9085-f1c89c1be872-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:01 crc kubenswrapper[4637]: E1201 15:04:01.313004 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e298d96-fc57-44a4-9085-f1c89c1be872" containerName="glance-log" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.313074 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e298d96-fc57-44a4-9085-f1c89c1be872" containerName="glance-log" Dec 
01 15:04:01 crc kubenswrapper[4637]: E1201 15:04:01.313130 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecec3227-52bd-4b05-83ac-90218117a222" containerName="placement-db-sync" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.313142 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecec3227-52bd-4b05-83ac-90218117a222" containerName="placement-db-sync" Dec 01 15:04:01 crc kubenswrapper[4637]: E1201 15:04:01.314255 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e298d96-fc57-44a4-9085-f1c89c1be872" containerName="glance-httpd" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.314269 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e298d96-fc57-44a4-9085-f1c89c1be872" containerName="glance-httpd" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.314659 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e298d96-fc57-44a4-9085-f1c89c1be872" containerName="glance-log" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.314685 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecec3227-52bd-4b05-83ac-90218117a222" containerName="placement-db-sync" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.314706 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e298d96-fc57-44a4-9085-f1c89c1be872" containerName="glance-httpd" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.326374 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-85bcc8d488-896bl" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.327124 4637 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.334323 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.334526 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.334725 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-z6vzs" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.335017 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.335122 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.337873 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85bcc8d488-896bl"] Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.354649 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.397049 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.415199 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74bff823-e398-4a06-a477-d98060ddad39-internal-tls-certs\") pod \"placement-85bcc8d488-896bl\" (UID: 
\"74bff823-e398-4a06-a477-d98060ddad39\") " pod="openstack/placement-85bcc8d488-896bl" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.415287 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74bff823-e398-4a06-a477-d98060ddad39-scripts\") pod \"placement-85bcc8d488-896bl\" (UID: \"74bff823-e398-4a06-a477-d98060ddad39\") " pod="openstack/placement-85bcc8d488-896bl" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.415372 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74bff823-e398-4a06-a477-d98060ddad39-logs\") pod \"placement-85bcc8d488-896bl\" (UID: \"74bff823-e398-4a06-a477-d98060ddad39\") " pod="openstack/placement-85bcc8d488-896bl" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.415495 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzh26\" (UniqueName: \"kubernetes.io/projected/74bff823-e398-4a06-a477-d98060ddad39-kube-api-access-kzh26\") pod \"placement-85bcc8d488-896bl\" (UID: \"74bff823-e398-4a06-a477-d98060ddad39\") " pod="openstack/placement-85bcc8d488-896bl" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.415675 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74bff823-e398-4a06-a477-d98060ddad39-public-tls-certs\") pod \"placement-85bcc8d488-896bl\" (UID: \"74bff823-e398-4a06-a477-d98060ddad39\") " pod="openstack/placement-85bcc8d488-896bl" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.415870 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74bff823-e398-4a06-a477-d98060ddad39-config-data\") pod \"placement-85bcc8d488-896bl\" (UID: 
\"74bff823-e398-4a06-a477-d98060ddad39\") " pod="openstack/placement-85bcc8d488-896bl" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.415906 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74bff823-e398-4a06-a477-d98060ddad39-combined-ca-bundle\") pod \"placement-85bcc8d488-896bl\" (UID: \"74bff823-e398-4a06-a477-d98060ddad39\") " pod="openstack/placement-85bcc8d488-896bl" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.415996 4637 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.450306 4637 scope.go:117] "RemoveContainer" containerID="d0197255adce68c41a63805d52e97aabdabde04a0c16cb11eeb1e5b9719f79ac" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.493109 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.494891 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.502208 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.502430 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.519986 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74bff823-e398-4a06-a477-d98060ddad39-scripts\") pod \"placement-85bcc8d488-896bl\" (UID: \"74bff823-e398-4a06-a477-d98060ddad39\") " pod="openstack/placement-85bcc8d488-896bl" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.520035 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74bff823-e398-4a06-a477-d98060ddad39-logs\") pod \"placement-85bcc8d488-896bl\" (UID: \"74bff823-e398-4a06-a477-d98060ddad39\") " pod="openstack/placement-85bcc8d488-896bl" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.520075 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzh26\" (UniqueName: \"kubernetes.io/projected/74bff823-e398-4a06-a477-d98060ddad39-kube-api-access-kzh26\") pod \"placement-85bcc8d488-896bl\" (UID: \"74bff823-e398-4a06-a477-d98060ddad39\") " pod="openstack/placement-85bcc8d488-896bl" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.520136 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74bff823-e398-4a06-a477-d98060ddad39-public-tls-certs\") pod \"placement-85bcc8d488-896bl\" (UID: \"74bff823-e398-4a06-a477-d98060ddad39\") " pod="openstack/placement-85bcc8d488-896bl" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 
15:04:01.520196 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74bff823-e398-4a06-a477-d98060ddad39-config-data\") pod \"placement-85bcc8d488-896bl\" (UID: \"74bff823-e398-4a06-a477-d98060ddad39\") " pod="openstack/placement-85bcc8d488-896bl" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.520214 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74bff823-e398-4a06-a477-d98060ddad39-combined-ca-bundle\") pod \"placement-85bcc8d488-896bl\" (UID: \"74bff823-e398-4a06-a477-d98060ddad39\") " pod="openstack/placement-85bcc8d488-896bl" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.520249 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74bff823-e398-4a06-a477-d98060ddad39-internal-tls-certs\") pod \"placement-85bcc8d488-896bl\" (UID: \"74bff823-e398-4a06-a477-d98060ddad39\") " pod="openstack/placement-85bcc8d488-896bl" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.521387 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.522441 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74bff823-e398-4a06-a477-d98060ddad39-logs\") pod \"placement-85bcc8d488-896bl\" (UID: \"74bff823-e398-4a06-a477-d98060ddad39\") " pod="openstack/placement-85bcc8d488-896bl" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.541088 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74bff823-e398-4a06-a477-d98060ddad39-config-data\") pod \"placement-85bcc8d488-896bl\" (UID: \"74bff823-e398-4a06-a477-d98060ddad39\") " pod="openstack/placement-85bcc8d488-896bl" 
Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.546083 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74bff823-e398-4a06-a477-d98060ddad39-internal-tls-certs\") pod \"placement-85bcc8d488-896bl\" (UID: \"74bff823-e398-4a06-a477-d98060ddad39\") " pod="openstack/placement-85bcc8d488-896bl" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.546275 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74bff823-e398-4a06-a477-d98060ddad39-public-tls-certs\") pod \"placement-85bcc8d488-896bl\" (UID: \"74bff823-e398-4a06-a477-d98060ddad39\") " pod="openstack/placement-85bcc8d488-896bl" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.559992 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74bff823-e398-4a06-a477-d98060ddad39-scripts\") pod \"placement-85bcc8d488-896bl\" (UID: \"74bff823-e398-4a06-a477-d98060ddad39\") " pod="openstack/placement-85bcc8d488-896bl" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.560187 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzh26\" (UniqueName: \"kubernetes.io/projected/74bff823-e398-4a06-a477-d98060ddad39-kube-api-access-kzh26\") pod \"placement-85bcc8d488-896bl\" (UID: \"74bff823-e398-4a06-a477-d98060ddad39\") " pod="openstack/placement-85bcc8d488-896bl" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.560708 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74bff823-e398-4a06-a477-d98060ddad39-combined-ca-bundle\") pod \"placement-85bcc8d488-896bl\" (UID: \"74bff823-e398-4a06-a477-d98060ddad39\") " pod="openstack/placement-85bcc8d488-896bl" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.626371 4637 scope.go:117] "RemoveContainer" 
containerID="5cf25168a057ba4a56019d7ceabd539e4876523898d6d2a3bea59d4f882a2965" Dec 01 15:04:01 crc kubenswrapper[4637]: E1201 15:04:01.630337 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cf25168a057ba4a56019d7ceabd539e4876523898d6d2a3bea59d4f882a2965\": container with ID starting with 5cf25168a057ba4a56019d7ceabd539e4876523898d6d2a3bea59d4f882a2965 not found: ID does not exist" containerID="5cf25168a057ba4a56019d7ceabd539e4876523898d6d2a3bea59d4f882a2965" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.630398 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cf25168a057ba4a56019d7ceabd539e4876523898d6d2a3bea59d4f882a2965"} err="failed to get container status \"5cf25168a057ba4a56019d7ceabd539e4876523898d6d2a3bea59d4f882a2965\": rpc error: code = NotFound desc = could not find container \"5cf25168a057ba4a56019d7ceabd539e4876523898d6d2a3bea59d4f882a2965\": container with ID starting with 5cf25168a057ba4a56019d7ceabd539e4876523898d6d2a3bea59d4f882a2965 not found: ID does not exist" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.630433 4637 scope.go:117] "RemoveContainer" containerID="d0197255adce68c41a63805d52e97aabdabde04a0c16cb11eeb1e5b9719f79ac" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.630826 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " pod="openstack/glance-default-external-api-0" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.630872 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94132f58-a470-4b02-acc0-f59d994e07ea-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " pod="openstack/glance-default-external-api-0" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.630937 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94132f58-a470-4b02-acc0-f59d994e07ea-logs\") pod \"glance-default-external-api-0\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " pod="openstack/glance-default-external-api-0" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.631081 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94132f58-a470-4b02-acc0-f59d994e07ea-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " pod="openstack/glance-default-external-api-0" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.631113 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjlhp\" (UniqueName: \"kubernetes.io/projected/94132f58-a470-4b02-acc0-f59d994e07ea-kube-api-access-tjlhp\") pod \"glance-default-external-api-0\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " pod="openstack/glance-default-external-api-0" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.631137 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94132f58-a470-4b02-acc0-f59d994e07ea-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " pod="openstack/glance-default-external-api-0" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.631182 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/94132f58-a470-4b02-acc0-f59d994e07ea-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " pod="openstack/glance-default-external-api-0" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.631267 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94132f58-a470-4b02-acc0-f59d994e07ea-scripts\") pod \"glance-default-external-api-0\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " pod="openstack/glance-default-external-api-0" Dec 01 15:04:01 crc kubenswrapper[4637]: E1201 15:04:01.635961 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0197255adce68c41a63805d52e97aabdabde04a0c16cb11eeb1e5b9719f79ac\": container with ID starting with d0197255adce68c41a63805d52e97aabdabde04a0c16cb11eeb1e5b9719f79ac not found: ID does not exist" containerID="d0197255adce68c41a63805d52e97aabdabde04a0c16cb11eeb1e5b9719f79ac" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.636030 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0197255adce68c41a63805d52e97aabdabde04a0c16cb11eeb1e5b9719f79ac"} err="failed to get container status \"d0197255adce68c41a63805d52e97aabdabde04a0c16cb11eeb1e5b9719f79ac\": rpc error: code = NotFound desc = could not find container \"d0197255adce68c41a63805d52e97aabdabde04a0c16cb11eeb1e5b9719f79ac\": container with ID starting with d0197255adce68c41a63805d52e97aabdabde04a0c16cb11eeb1e5b9719f79ac not found: ID does not exist" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.636064 4637 scope.go:117] "RemoveContainer" containerID="5cf25168a057ba4a56019d7ceabd539e4876523898d6d2a3bea59d4f882a2965" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.647768 4637 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5cf25168a057ba4a56019d7ceabd539e4876523898d6d2a3bea59d4f882a2965"} err="failed to get container status \"5cf25168a057ba4a56019d7ceabd539e4876523898d6d2a3bea59d4f882a2965\": rpc error: code = NotFound desc = could not find container \"5cf25168a057ba4a56019d7ceabd539e4876523898d6d2a3bea59d4f882a2965\": container with ID starting with 5cf25168a057ba4a56019d7ceabd539e4876523898d6d2a3bea59d4f882a2965 not found: ID does not exist" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.647821 4637 scope.go:117] "RemoveContainer" containerID="d0197255adce68c41a63805d52e97aabdabde04a0c16cb11eeb1e5b9719f79ac" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.648904 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0197255adce68c41a63805d52e97aabdabde04a0c16cb11eeb1e5b9719f79ac"} err="failed to get container status \"d0197255adce68c41a63805d52e97aabdabde04a0c16cb11eeb1e5b9719f79ac\": rpc error: code = NotFound desc = could not find container \"d0197255adce68c41a63805d52e97aabdabde04a0c16cb11eeb1e5b9719f79ac\": container with ID starting with d0197255adce68c41a63805d52e97aabdabde04a0c16cb11eeb1e5b9719f79ac not found: ID does not exist" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.681567 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-85bcc8d488-896bl" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.732696 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94132f58-a470-4b02-acc0-f59d994e07ea-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " pod="openstack/glance-default-external-api-0" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.732752 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjlhp\" (UniqueName: \"kubernetes.io/projected/94132f58-a470-4b02-acc0-f59d994e07ea-kube-api-access-tjlhp\") pod \"glance-default-external-api-0\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " pod="openstack/glance-default-external-api-0" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.732778 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94132f58-a470-4b02-acc0-f59d994e07ea-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " pod="openstack/glance-default-external-api-0" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.732804 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94132f58-a470-4b02-acc0-f59d994e07ea-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " pod="openstack/glance-default-external-api-0" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.732856 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94132f58-a470-4b02-acc0-f59d994e07ea-scripts\") pod \"glance-default-external-api-0\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " 
pod="openstack/glance-default-external-api-0" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.732922 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " pod="openstack/glance-default-external-api-0" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.732960 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94132f58-a470-4b02-acc0-f59d994e07ea-config-data\") pod \"glance-default-external-api-0\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " pod="openstack/glance-default-external-api-0" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.732978 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94132f58-a470-4b02-acc0-f59d994e07ea-logs\") pod \"glance-default-external-api-0\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " pod="openstack/glance-default-external-api-0" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.733176 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94132f58-a470-4b02-acc0-f59d994e07ea-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " pod="openstack/glance-default-external-api-0" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.733367 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94132f58-a470-4b02-acc0-f59d994e07ea-logs\") pod \"glance-default-external-api-0\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " pod="openstack/glance-default-external-api-0" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.735274 4637 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.738612 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94132f58-a470-4b02-acc0-f59d994e07ea-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " pod="openstack/glance-default-external-api-0" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.748427 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94132f58-a470-4b02-acc0-f59d994e07ea-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " pod="openstack/glance-default-external-api-0" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.749735 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94132f58-a470-4b02-acc0-f59d994e07ea-config-data\") pod \"glance-default-external-api-0\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " pod="openstack/glance-default-external-api-0" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.750607 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94132f58-a470-4b02-acc0-f59d994e07ea-scripts\") pod \"glance-default-external-api-0\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " pod="openstack/glance-default-external-api-0" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.775229 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjlhp\" (UniqueName: 
\"kubernetes.io/projected/94132f58-a470-4b02-acc0-f59d994e07ea-kube-api-access-tjlhp\") pod \"glance-default-external-api-0\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " pod="openstack/glance-default-external-api-0" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.786156 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " pod="openstack/glance-default-external-api-0" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.799549 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e298d96-fc57-44a4-9085-f1c89c1be872" path="/var/lib/kubelet/pods/2e298d96-fc57-44a4-9085-f1c89c1be872/volumes" Dec 01 15:04:01 crc kubenswrapper[4637]: I1201 15:04:01.948848 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 15:04:02 crc kubenswrapper[4637]: I1201 15:04:02.388041 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"db43f411-7028-4fde-ac84-bc4b00053f4f","Type":"ContainerStarted","Data":"f96a7982b3d45d73936fcbe8a94cfe863f3e0e26c8ba79b687eeceaee7bde7d8"} Dec 01 15:04:02 crc kubenswrapper[4637]: I1201 15:04:02.575791 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.57576818 podStartE2EDuration="6.57576818s" podCreationTimestamp="2025-12-01 15:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:04:02.437079612 +0000 UTC m=+1092.954788440" watchObservedRunningTime="2025-12-01 15:04:02.57576818 +0000 UTC m=+1093.093477008" Dec 01 15:04:02 crc kubenswrapper[4637]: I1201 15:04:02.587475 4637 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/placement-85bcc8d488-896bl"] Dec 01 15:04:02 crc kubenswrapper[4637]: I1201 15:04:02.826725 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 15:04:03 crc kubenswrapper[4637]: I1201 15:04:03.473486 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85bcc8d488-896bl" event={"ID":"74bff823-e398-4a06-a477-d98060ddad39","Type":"ContainerStarted","Data":"77a55efea59e989a5064d822a4a49f0c71ecb49b531f0ab8ffa106d5558c31be"} Dec 01 15:04:03 crc kubenswrapper[4637]: I1201 15:04:03.473958 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85bcc8d488-896bl" event={"ID":"74bff823-e398-4a06-a477-d98060ddad39","Type":"ContainerStarted","Data":"90e70214bc19d776e0775ca8ba27af03ab4332ee9a90fdf85bbedeae2490673a"} Dec 01 15:04:03 crc kubenswrapper[4637]: I1201 15:04:03.477495 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94132f58-a470-4b02-acc0-f59d994e07ea","Type":"ContainerStarted","Data":"5d2e7ac10ba69658226b6d05d0796f194980156be1731ee7eb474f8836bf3c2f"} Dec 01 15:04:04 crc kubenswrapper[4637]: I1201 15:04:04.517576 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94132f58-a470-4b02-acc0-f59d994e07ea","Type":"ContainerStarted","Data":"a03150ec6f1cb47f96ac044d3d9982eebaa64693b5dfeccf4951fba1b53460c7"} Dec 01 15:04:04 crc kubenswrapper[4637]: I1201 15:04:04.519843 4637 generic.go:334] "Generic (PLEG): container finished" podID="cb48315c-5146-4f1e-9d0f-e39186e54083" containerID="b5b22f55974133b109af48d3e32c860a4cf1a0e9836c6dfbb8acc43cc7b8a4d3" exitCode=0 Dec 01 15:04:04 crc kubenswrapper[4637]: I1201 15:04:04.519911 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5lxgt" 
event={"ID":"cb48315c-5146-4f1e-9d0f-e39186e54083","Type":"ContainerDied","Data":"b5b22f55974133b109af48d3e32c860a4cf1a0e9836c6dfbb8acc43cc7b8a4d3"} Dec 01 15:04:04 crc kubenswrapper[4637]: I1201 15:04:04.526268 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85bcc8d488-896bl" event={"ID":"74bff823-e398-4a06-a477-d98060ddad39","Type":"ContainerStarted","Data":"2e96766c21140a0e0570990e6457dfaa12c1ccef2bfaf6c4a9d1727d6a219b46"} Dec 01 15:04:04 crc kubenswrapper[4637]: I1201 15:04:04.527014 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85bcc8d488-896bl" Dec 01 15:04:04 crc kubenswrapper[4637]: I1201 15:04:04.527069 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85bcc8d488-896bl" Dec 01 15:04:04 crc kubenswrapper[4637]: I1201 15:04:04.572985 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-85bcc8d488-896bl" podStartSLOduration=3.572953262 podStartE2EDuration="3.572953262s" podCreationTimestamp="2025-12-01 15:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:04:04.559922649 +0000 UTC m=+1095.077631477" watchObservedRunningTime="2025-12-01 15:04:04.572953262 +0000 UTC m=+1095.090662090" Dec 01 15:04:06 crc kubenswrapper[4637]: I1201 15:04:06.494011 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:04:06 crc kubenswrapper[4637]: I1201 15:04:06.494170 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:04:06 crc kubenswrapper[4637]: I1201 15:04:06.720751 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fcb665488-kvv69" Dec 01 15:04:06 crc kubenswrapper[4637]: I1201 15:04:06.720832 4637 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fcb665488-kvv69" Dec 01 15:04:07 crc kubenswrapper[4637]: I1201 15:04:07.084538 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 01 15:04:07 crc kubenswrapper[4637]: I1201 15:04:07.086679 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 01 15:04:07 crc kubenswrapper[4637]: I1201 15:04:07.136811 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 01 15:04:07 crc kubenswrapper[4637]: I1201 15:04:07.140044 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 01 15:04:07 crc kubenswrapper[4637]: I1201 15:04:07.568308 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 01 15:04:07 crc kubenswrapper[4637]: I1201 15:04:07.568357 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 01 15:04:09 crc kubenswrapper[4637]: I1201 15:04:09.878083 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5lxgt" Dec 01 15:04:09 crc kubenswrapper[4637]: I1201 15:04:09.956213 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-credential-keys\") pod \"cb48315c-5146-4f1e-9d0f-e39186e54083\" (UID: \"cb48315c-5146-4f1e-9d0f-e39186e54083\") " Dec 01 15:04:09 crc kubenswrapper[4637]: I1201 15:04:09.956261 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-config-data\") pod \"cb48315c-5146-4f1e-9d0f-e39186e54083\" (UID: \"cb48315c-5146-4f1e-9d0f-e39186e54083\") " Dec 01 15:04:09 crc kubenswrapper[4637]: I1201 15:04:09.956284 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-combined-ca-bundle\") pod \"cb48315c-5146-4f1e-9d0f-e39186e54083\" (UID: \"cb48315c-5146-4f1e-9d0f-e39186e54083\") " Dec 01 15:04:09 crc kubenswrapper[4637]: I1201 15:04:09.956415 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-fernet-keys\") pod \"cb48315c-5146-4f1e-9d0f-e39186e54083\" (UID: \"cb48315c-5146-4f1e-9d0f-e39186e54083\") " Dec 01 15:04:09 crc kubenswrapper[4637]: I1201 15:04:09.956506 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsf9z\" (UniqueName: \"kubernetes.io/projected/cb48315c-5146-4f1e-9d0f-e39186e54083-kube-api-access-jsf9z\") pod \"cb48315c-5146-4f1e-9d0f-e39186e54083\" (UID: \"cb48315c-5146-4f1e-9d0f-e39186e54083\") " Dec 01 15:04:09 crc kubenswrapper[4637]: I1201 15:04:09.956534 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-scripts\") pod \"cb48315c-5146-4f1e-9d0f-e39186e54083\" (UID: \"cb48315c-5146-4f1e-9d0f-e39186e54083\") " Dec 01 15:04:09 crc kubenswrapper[4637]: I1201 15:04:09.964873 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-scripts" (OuterVolumeSpecName: "scripts") pod "cb48315c-5146-4f1e-9d0f-e39186e54083" (UID: "cb48315c-5146-4f1e-9d0f-e39186e54083"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:04:09 crc kubenswrapper[4637]: I1201 15:04:09.966043 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "cb48315c-5146-4f1e-9d0f-e39186e54083" (UID: "cb48315c-5146-4f1e-9d0f-e39186e54083"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:04:09 crc kubenswrapper[4637]: I1201 15:04:09.971272 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "cb48315c-5146-4f1e-9d0f-e39186e54083" (UID: "cb48315c-5146-4f1e-9d0f-e39186e54083"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:04:09 crc kubenswrapper[4637]: I1201 15:04:09.975123 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb48315c-5146-4f1e-9d0f-e39186e54083-kube-api-access-jsf9z" (OuterVolumeSpecName: "kube-api-access-jsf9z") pod "cb48315c-5146-4f1e-9d0f-e39186e54083" (UID: "cb48315c-5146-4f1e-9d0f-e39186e54083"). InnerVolumeSpecName "kube-api-access-jsf9z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:04:10 crc kubenswrapper[4637]: I1201 15:04:10.019223 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-config-data" (OuterVolumeSpecName: "config-data") pod "cb48315c-5146-4f1e-9d0f-e39186e54083" (UID: "cb48315c-5146-4f1e-9d0f-e39186e54083"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:04:10 crc kubenswrapper[4637]: I1201 15:04:10.044381 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb48315c-5146-4f1e-9d0f-e39186e54083" (UID: "cb48315c-5146-4f1e-9d0f-e39186e54083"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:04:10 crc kubenswrapper[4637]: I1201 15:04:10.060470 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsf9z\" (UniqueName: \"kubernetes.io/projected/cb48315c-5146-4f1e-9d0f-e39186e54083-kube-api-access-jsf9z\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:10 crc kubenswrapper[4637]: I1201 15:04:10.060503 4637 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:10 crc kubenswrapper[4637]: I1201 15:04:10.060513 4637 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:10 crc kubenswrapper[4637]: I1201 15:04:10.060522 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 
15:04:10 crc kubenswrapper[4637]: I1201 15:04:10.060531 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:10 crc kubenswrapper[4637]: I1201 15:04:10.060539 4637 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cb48315c-5146-4f1e-9d0f-e39186e54083-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:10 crc kubenswrapper[4637]: I1201 15:04:10.607149 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5lxgt" event={"ID":"cb48315c-5146-4f1e-9d0f-e39186e54083","Type":"ContainerDied","Data":"11c90b4353b44b69b6f54eb8147adf6d7dd797f8e228d0f551a1cdf67e63c9a1"} Dec 01 15:04:10 crc kubenswrapper[4637]: I1201 15:04:10.607239 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11c90b4353b44b69b6f54eb8147adf6d7dd797f8e228d0f551a1cdf67e63c9a1" Dec 01 15:04:10 crc kubenswrapper[4637]: I1201 15:04:10.607190 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5lxgt" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.031241 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6f896d59db-mf67s"] Dec 01 15:04:11 crc kubenswrapper[4637]: E1201 15:04:11.032273 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb48315c-5146-4f1e-9d0f-e39186e54083" containerName="keystone-bootstrap" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.032289 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb48315c-5146-4f1e-9d0f-e39186e54083" containerName="keystone-bootstrap" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.032494 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb48315c-5146-4f1e-9d0f-e39186e54083" containerName="keystone-bootstrap" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.033293 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6f896d59db-mf67s" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.036647 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.037921 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.038122 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jfpqw" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.038247 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.038390 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.038595 4637 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cert-keystone-public-svc" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.056719 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6f896d59db-mf67s"] Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.096302 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e2b0a1d-1624-43e9-8f38-9918fa4b0b85-internal-tls-certs\") pod \"keystone-6f896d59db-mf67s\" (UID: \"0e2b0a1d-1624-43e9-8f38-9918fa4b0b85\") " pod="openstack/keystone-6f896d59db-mf67s" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.096586 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0e2b0a1d-1624-43e9-8f38-9918fa4b0b85-fernet-keys\") pod \"keystone-6f896d59db-mf67s\" (UID: \"0e2b0a1d-1624-43e9-8f38-9918fa4b0b85\") " pod="openstack/keystone-6f896d59db-mf67s" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.096669 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e2b0a1d-1624-43e9-8f38-9918fa4b0b85-combined-ca-bundle\") pod \"keystone-6f896d59db-mf67s\" (UID: \"0e2b0a1d-1624-43e9-8f38-9918fa4b0b85\") " pod="openstack/keystone-6f896d59db-mf67s" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.096759 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e2b0a1d-1624-43e9-8f38-9918fa4b0b85-scripts\") pod \"keystone-6f896d59db-mf67s\" (UID: \"0e2b0a1d-1624-43e9-8f38-9918fa4b0b85\") " pod="openstack/keystone-6f896d59db-mf67s" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.096832 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0e2b0a1d-1624-43e9-8f38-9918fa4b0b85-public-tls-certs\") pod \"keystone-6f896d59db-mf67s\" (UID: \"0e2b0a1d-1624-43e9-8f38-9918fa4b0b85\") " pod="openstack/keystone-6f896d59db-mf67s" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.096906 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0e2b0a1d-1624-43e9-8f38-9918fa4b0b85-credential-keys\") pod \"keystone-6f896d59db-mf67s\" (UID: \"0e2b0a1d-1624-43e9-8f38-9918fa4b0b85\") " pod="openstack/keystone-6f896d59db-mf67s" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.097037 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e2b0a1d-1624-43e9-8f38-9918fa4b0b85-config-data\") pod \"keystone-6f896d59db-mf67s\" (UID: \"0e2b0a1d-1624-43e9-8f38-9918fa4b0b85\") " pod="openstack/keystone-6f896d59db-mf67s" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.097131 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvv6g\" (UniqueName: \"kubernetes.io/projected/0e2b0a1d-1624-43e9-8f38-9918fa4b0b85-kube-api-access-mvv6g\") pod \"keystone-6f896d59db-mf67s\" (UID: \"0e2b0a1d-1624-43e9-8f38-9918fa4b0b85\") " pod="openstack/keystone-6f896d59db-mf67s" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.200063 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e2b0a1d-1624-43e9-8f38-9918fa4b0b85-combined-ca-bundle\") pod \"keystone-6f896d59db-mf67s\" (UID: \"0e2b0a1d-1624-43e9-8f38-9918fa4b0b85\") " pod="openstack/keystone-6f896d59db-mf67s" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.200386 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0e2b0a1d-1624-43e9-8f38-9918fa4b0b85-scripts\") pod \"keystone-6f896d59db-mf67s\" (UID: \"0e2b0a1d-1624-43e9-8f38-9918fa4b0b85\") " pod="openstack/keystone-6f896d59db-mf67s" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.201089 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e2b0a1d-1624-43e9-8f38-9918fa4b0b85-public-tls-certs\") pod \"keystone-6f896d59db-mf67s\" (UID: \"0e2b0a1d-1624-43e9-8f38-9918fa4b0b85\") " pod="openstack/keystone-6f896d59db-mf67s" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.201196 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0e2b0a1d-1624-43e9-8f38-9918fa4b0b85-credential-keys\") pod \"keystone-6f896d59db-mf67s\" (UID: \"0e2b0a1d-1624-43e9-8f38-9918fa4b0b85\") " pod="openstack/keystone-6f896d59db-mf67s" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.201321 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e2b0a1d-1624-43e9-8f38-9918fa4b0b85-config-data\") pod \"keystone-6f896d59db-mf67s\" (UID: \"0e2b0a1d-1624-43e9-8f38-9918fa4b0b85\") " pod="openstack/keystone-6f896d59db-mf67s" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.201416 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvv6g\" (UniqueName: \"kubernetes.io/projected/0e2b0a1d-1624-43e9-8f38-9918fa4b0b85-kube-api-access-mvv6g\") pod \"keystone-6f896d59db-mf67s\" (UID: \"0e2b0a1d-1624-43e9-8f38-9918fa4b0b85\") " pod="openstack/keystone-6f896d59db-mf67s" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.201548 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e2b0a1d-1624-43e9-8f38-9918fa4b0b85-internal-tls-certs\") pod 
\"keystone-6f896d59db-mf67s\" (UID: \"0e2b0a1d-1624-43e9-8f38-9918fa4b0b85\") " pod="openstack/keystone-6f896d59db-mf67s" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.201628 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0e2b0a1d-1624-43e9-8f38-9918fa4b0b85-fernet-keys\") pod \"keystone-6f896d59db-mf67s\" (UID: \"0e2b0a1d-1624-43e9-8f38-9918fa4b0b85\") " pod="openstack/keystone-6f896d59db-mf67s" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.213135 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e2b0a1d-1624-43e9-8f38-9918fa4b0b85-config-data\") pod \"keystone-6f896d59db-mf67s\" (UID: \"0e2b0a1d-1624-43e9-8f38-9918fa4b0b85\") " pod="openstack/keystone-6f896d59db-mf67s" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.214701 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e2b0a1d-1624-43e9-8f38-9918fa4b0b85-public-tls-certs\") pod \"keystone-6f896d59db-mf67s\" (UID: \"0e2b0a1d-1624-43e9-8f38-9918fa4b0b85\") " pod="openstack/keystone-6f896d59db-mf67s" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.216436 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e2b0a1d-1624-43e9-8f38-9918fa4b0b85-internal-tls-certs\") pod \"keystone-6f896d59db-mf67s\" (UID: \"0e2b0a1d-1624-43e9-8f38-9918fa4b0b85\") " pod="openstack/keystone-6f896d59db-mf67s" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.219802 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0e2b0a1d-1624-43e9-8f38-9918fa4b0b85-credential-keys\") pod \"keystone-6f896d59db-mf67s\" (UID: \"0e2b0a1d-1624-43e9-8f38-9918fa4b0b85\") " pod="openstack/keystone-6f896d59db-mf67s" Dec 01 
15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.219882 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e2b0a1d-1624-43e9-8f38-9918fa4b0b85-scripts\") pod \"keystone-6f896d59db-mf67s\" (UID: \"0e2b0a1d-1624-43e9-8f38-9918fa4b0b85\") " pod="openstack/keystone-6f896d59db-mf67s" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.223023 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0e2b0a1d-1624-43e9-8f38-9918fa4b0b85-fernet-keys\") pod \"keystone-6f896d59db-mf67s\" (UID: \"0e2b0a1d-1624-43e9-8f38-9918fa4b0b85\") " pod="openstack/keystone-6f896d59db-mf67s" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.226523 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e2b0a1d-1624-43e9-8f38-9918fa4b0b85-combined-ca-bundle\") pod \"keystone-6f896d59db-mf67s\" (UID: \"0e2b0a1d-1624-43e9-8f38-9918fa4b0b85\") " pod="openstack/keystone-6f896d59db-mf67s" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.237767 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvv6g\" (UniqueName: \"kubernetes.io/projected/0e2b0a1d-1624-43e9-8f38-9918fa4b0b85-kube-api-access-mvv6g\") pod \"keystone-6f896d59db-mf67s\" (UID: \"0e2b0a1d-1624-43e9-8f38-9918fa4b0b85\") " pod="openstack/keystone-6f896d59db-mf67s" Dec 01 15:04:11 crc kubenswrapper[4637]: I1201 15:04:11.360683 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6f896d59db-mf67s" Dec 01 15:04:12 crc kubenswrapper[4637]: I1201 15:04:12.286839 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 01 15:04:12 crc kubenswrapper[4637]: I1201 15:04:12.287423 4637 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 15:04:12 crc kubenswrapper[4637]: I1201 15:04:12.381790 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 01 15:04:15 crc kubenswrapper[4637]: I1201 15:04:15.613857 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:04:15 crc kubenswrapper[4637]: I1201 15:04:15.614630 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:04:16 crc kubenswrapper[4637]: I1201 15:04:16.496050 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-64585bdddb-h9hvw" podUID="29e960b7-8574-4c38-bb22-67f5a77aaca6" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Dec 01 15:04:16 crc kubenswrapper[4637]: I1201 15:04:16.724138 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-fcb665488-kvv69" podUID="269bc165-8fbc-4c63-84ef-96b74d44fc16" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Dec 01 15:04:22 crc kubenswrapper[4637]: I1201 15:04:22.724169 4637 generic.go:334] "Generic (PLEG): container finished" podID="31a1344d-109c-400f-ac50-60be5fed1255" containerID="23ccf08a746215d892e4fd11542cc3e54b19e6516642cd4c1d601543bf54b8c0" exitCode=0 Dec 01 15:04:22 crc kubenswrapper[4637]: I1201 15:04:22.724231 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-x4gwb" event={"ID":"31a1344d-109c-400f-ac50-60be5fed1255","Type":"ContainerDied","Data":"23ccf08a746215d892e4fd11542cc3e54b19e6516642cd4c1d601543bf54b8c0"} Dec 01 15:04:26 crc kubenswrapper[4637]: I1201 15:04:26.702695 4637 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod09cbc5ac-7259-494d-8c1c-5d25eac1161c"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod09cbc5ac-7259-494d-8c1c-5d25eac1161c] : Timed out while waiting for systemd to remove kubepods-besteffort-pod09cbc5ac_7259_494d_8c1c_5d25eac1161c.slice" Dec 01 15:04:26 crc kubenswrapper[4637]: E1201 15:04:26.703064 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod09cbc5ac-7259-494d-8c1c-5d25eac1161c] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod09cbc5ac-7259-494d-8c1c-5d25eac1161c] : Timed out while waiting for systemd to remove kubepods-besteffort-pod09cbc5ac_7259_494d_8c1c_5d25eac1161c.slice" pod="openstack/horizon-6bfc57fb6f-w2dxg" podUID="09cbc5ac-7259-494d-8c1c-5d25eac1161c" Dec 01 15:04:26 crc kubenswrapper[4637]: I1201 15:04:26.770406 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6bfc57fb6f-w2dxg" Dec 01 15:04:26 crc kubenswrapper[4637]: I1201 15:04:26.845083 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6bfc57fb6f-w2dxg"] Dec 01 15:04:26 crc kubenswrapper[4637]: I1201 15:04:26.857068 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6bfc57fb6f-w2dxg"] Dec 01 15:04:27 crc kubenswrapper[4637]: I1201 15:04:27.781135 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cbc5ac-7259-494d-8c1c-5d25eac1161c" path="/var/lib/kubelet/pods/09cbc5ac-7259-494d-8c1c-5d25eac1161c/volumes" Dec 01 15:04:27 crc kubenswrapper[4637]: I1201 15:04:27.970647 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-x4gwb" Dec 01 15:04:28 crc kubenswrapper[4637]: I1201 15:04:28.085776 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfw9q\" (UniqueName: \"kubernetes.io/projected/31a1344d-109c-400f-ac50-60be5fed1255-kube-api-access-xfw9q\") pod \"31a1344d-109c-400f-ac50-60be5fed1255\" (UID: \"31a1344d-109c-400f-ac50-60be5fed1255\") " Dec 01 15:04:28 crc kubenswrapper[4637]: I1201 15:04:28.085909 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/31a1344d-109c-400f-ac50-60be5fed1255-config\") pod \"31a1344d-109c-400f-ac50-60be5fed1255\" (UID: \"31a1344d-109c-400f-ac50-60be5fed1255\") " Dec 01 15:04:28 crc kubenswrapper[4637]: I1201 15:04:28.085956 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31a1344d-109c-400f-ac50-60be5fed1255-combined-ca-bundle\") pod \"31a1344d-109c-400f-ac50-60be5fed1255\" (UID: \"31a1344d-109c-400f-ac50-60be5fed1255\") " Dec 01 15:04:28 crc kubenswrapper[4637]: I1201 15:04:28.094219 4637 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/31a1344d-109c-400f-ac50-60be5fed1255-kube-api-access-xfw9q" (OuterVolumeSpecName: "kube-api-access-xfw9q") pod "31a1344d-109c-400f-ac50-60be5fed1255" (UID: "31a1344d-109c-400f-ac50-60be5fed1255"). InnerVolumeSpecName "kube-api-access-xfw9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:04:28 crc kubenswrapper[4637]: I1201 15:04:28.112912 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31a1344d-109c-400f-ac50-60be5fed1255-config" (OuterVolumeSpecName: "config") pod "31a1344d-109c-400f-ac50-60be5fed1255" (UID: "31a1344d-109c-400f-ac50-60be5fed1255"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:04:28 crc kubenswrapper[4637]: I1201 15:04:28.134914 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31a1344d-109c-400f-ac50-60be5fed1255-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31a1344d-109c-400f-ac50-60be5fed1255" (UID: "31a1344d-109c-400f-ac50-60be5fed1255"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:04:28 crc kubenswrapper[4637]: I1201 15:04:28.188609 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31a1344d-109c-400f-ac50-60be5fed1255-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:28 crc kubenswrapper[4637]: I1201 15:04:28.188653 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfw9q\" (UniqueName: \"kubernetes.io/projected/31a1344d-109c-400f-ac50-60be5fed1255-kube-api-access-xfw9q\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:28 crc kubenswrapper[4637]: I1201 15:04:28.188670 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/31a1344d-109c-400f-ac50-60be5fed1255-config\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:28 crc kubenswrapper[4637]: I1201 15:04:28.792708 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-x4gwb" event={"ID":"31a1344d-109c-400f-ac50-60be5fed1255","Type":"ContainerDied","Data":"fd84a2019a33a54702d4c9969c818bd8dfc80c52cd893e2fdd02ea040440a7ea"} Dec 01 15:04:28 crc kubenswrapper[4637]: I1201 15:04:28.793161 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd84a2019a33a54702d4c9969c818bd8dfc80c52cd893e2fdd02ea040440a7ea" Dec 01 15:04:28 crc kubenswrapper[4637]: I1201 15:04:28.793045 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-x4gwb" Dec 01 15:04:28 crc kubenswrapper[4637]: E1201 15:04:28.944562 4637 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31a1344d_109c_400f_ac50_60be5fed1255.slice/crio-fd84a2019a33a54702d4c9969c818bd8dfc80c52cd893e2fdd02ea040440a7ea\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31a1344d_109c_400f_ac50_60be5fed1255.slice\": RecentStats: unable to find data in memory cache]" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.250400 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-sx8n2"] Dec 01 15:04:29 crc kubenswrapper[4637]: E1201 15:04:29.250809 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31a1344d-109c-400f-ac50-60be5fed1255" containerName="neutron-db-sync" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.250830 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="31a1344d-109c-400f-ac50-60be5fed1255" containerName="neutron-db-sync" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.251055 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="31a1344d-109c-400f-ac50-60be5fed1255" containerName="neutron-db-sync" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.252132 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.265545 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-sx8n2"] Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.423212 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-config\") pod \"dnsmasq-dns-5ccc5c4795-sx8n2\" (UID: \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\") " pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.423265 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-sx8n2\" (UID: \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\") " pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.423310 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s996n\" (UniqueName: \"kubernetes.io/projected/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-kube-api-access-s996n\") pod \"dnsmasq-dns-5ccc5c4795-sx8n2\" (UID: \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\") " pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.423350 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-sx8n2\" (UID: \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\") " pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.423425 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-sx8n2\" (UID: \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\") " pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.423468 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-sx8n2\" (UID: \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\") " pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.432249 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-76ccd9588d-b65nb"] Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.438387 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-76ccd9588d-b65nb" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.444522 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rptj8" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.448454 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.455070 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.455351 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.463112 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-76ccd9588d-b65nb"] Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.526821 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4162adc0-1edf-4059-a924-ef743026eda4-config\") pod \"neutron-76ccd9588d-b65nb\" (UID: \"4162adc0-1edf-4059-a924-ef743026eda4\") " pod="openstack/neutron-76ccd9588d-b65nb" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.526876 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4162adc0-1edf-4059-a924-ef743026eda4-httpd-config\") pod \"neutron-76ccd9588d-b65nb\" (UID: \"4162adc0-1edf-4059-a924-ef743026eda4\") " pod="openstack/neutron-76ccd9588d-b65nb" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.526966 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-sx8n2\" (UID: \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\") " pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.527017 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-sx8n2\" (UID: \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\") " pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.527044 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-config\") pod \"dnsmasq-dns-5ccc5c4795-sx8n2\" (UID: \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\") " pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.527065 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-sx8n2\" (UID: \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\") " pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.527095 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s996n\" (UniqueName: \"kubernetes.io/projected/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-kube-api-access-s996n\") pod \"dnsmasq-dns-5ccc5c4795-sx8n2\" (UID: \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\") " pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.527127 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmq7c\" (UniqueName: \"kubernetes.io/projected/4162adc0-1edf-4059-a924-ef743026eda4-kube-api-access-qmq7c\") pod \"neutron-76ccd9588d-b65nb\" (UID: \"4162adc0-1edf-4059-a924-ef743026eda4\") " pod="openstack/neutron-76ccd9588d-b65nb" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.527150 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-sx8n2\" (UID: \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\") " pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.527186 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4162adc0-1edf-4059-a924-ef743026eda4-ovndb-tls-certs\") pod \"neutron-76ccd9588d-b65nb\" (UID: \"4162adc0-1edf-4059-a924-ef743026eda4\") " pod="openstack/neutron-76ccd9588d-b65nb" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.527204 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4162adc0-1edf-4059-a924-ef743026eda4-combined-ca-bundle\") pod \"neutron-76ccd9588d-b65nb\" (UID: \"4162adc0-1edf-4059-a924-ef743026eda4\") " pod="openstack/neutron-76ccd9588d-b65nb" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.528540 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-sx8n2\" (UID: \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\") " pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.531383 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-sx8n2\" (UID: \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\") " pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.531845 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-sx8n2\" (UID: \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\") " pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.532173 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-config\") pod \"dnsmasq-dns-5ccc5c4795-sx8n2\" (UID: \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\") " pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.532583 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5ccc5c4795-sx8n2\" (UID: \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\") " pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.583038 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s996n\" (UniqueName: \"kubernetes.io/projected/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-kube-api-access-s996n\") pod \"dnsmasq-dns-5ccc5c4795-sx8n2\" (UID: \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\") " pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.628660 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmq7c\" (UniqueName: \"kubernetes.io/projected/4162adc0-1edf-4059-a924-ef743026eda4-kube-api-access-qmq7c\") pod \"neutron-76ccd9588d-b65nb\" (UID: \"4162adc0-1edf-4059-a924-ef743026eda4\") " pod="openstack/neutron-76ccd9588d-b65nb" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.628730 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4162adc0-1edf-4059-a924-ef743026eda4-ovndb-tls-certs\") pod \"neutron-76ccd9588d-b65nb\" (UID: \"4162adc0-1edf-4059-a924-ef743026eda4\") " pod="openstack/neutron-76ccd9588d-b65nb" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.628751 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4162adc0-1edf-4059-a924-ef743026eda4-combined-ca-bundle\") pod \"neutron-76ccd9588d-b65nb\" (UID: \"4162adc0-1edf-4059-a924-ef743026eda4\") " pod="openstack/neutron-76ccd9588d-b65nb" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.628779 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4162adc0-1edf-4059-a924-ef743026eda4-config\") pod \"neutron-76ccd9588d-b65nb\" (UID: 
\"4162adc0-1edf-4059-a924-ef743026eda4\") " pod="openstack/neutron-76ccd9588d-b65nb" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.628802 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4162adc0-1edf-4059-a924-ef743026eda4-httpd-config\") pod \"neutron-76ccd9588d-b65nb\" (UID: \"4162adc0-1edf-4059-a924-ef743026eda4\") " pod="openstack/neutron-76ccd9588d-b65nb" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.639043 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4162adc0-1edf-4059-a924-ef743026eda4-combined-ca-bundle\") pod \"neutron-76ccd9588d-b65nb\" (UID: \"4162adc0-1edf-4059-a924-ef743026eda4\") " pod="openstack/neutron-76ccd9588d-b65nb" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.641373 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4162adc0-1edf-4059-a924-ef743026eda4-httpd-config\") pod \"neutron-76ccd9588d-b65nb\" (UID: \"4162adc0-1edf-4059-a924-ef743026eda4\") " pod="openstack/neutron-76ccd9588d-b65nb" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.642091 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4162adc0-1edf-4059-a924-ef743026eda4-ovndb-tls-certs\") pod \"neutron-76ccd9588d-b65nb\" (UID: \"4162adc0-1edf-4059-a924-ef743026eda4\") " pod="openstack/neutron-76ccd9588d-b65nb" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.646626 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4162adc0-1edf-4059-a924-ef743026eda4-config\") pod \"neutron-76ccd9588d-b65nb\" (UID: \"4162adc0-1edf-4059-a924-ef743026eda4\") " pod="openstack/neutron-76ccd9588d-b65nb" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.653334 4637 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmq7c\" (UniqueName: \"kubernetes.io/projected/4162adc0-1edf-4059-a924-ef743026eda4-kube-api-access-qmq7c\") pod \"neutron-76ccd9588d-b65nb\" (UID: \"4162adc0-1edf-4059-a924-ef743026eda4\") " pod="openstack/neutron-76ccd9588d-b65nb" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.783329 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-76ccd9588d-b65nb" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.792674 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-fcb665488-kvv69" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.845672 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:04:29 crc kubenswrapper[4637]: I1201 15:04:29.879378 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" Dec 01 15:04:30 crc kubenswrapper[4637]: E1201 15:04:30.397087 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 01 15:04:30 crc kubenswrapper[4637]: E1201 15:04:30.397263 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tc85c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-vlwgj_openstack(75ffcaf0-9c6e-4f8e-98a0-b9c44529527d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 15:04:30 crc kubenswrapper[4637]: E1201 15:04:30.402624 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-vlwgj" podUID="75ffcaf0-9c6e-4f8e-98a0-b9c44529527d" Dec 01 15:04:30 crc kubenswrapper[4637]: E1201 15:04:30.852731 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-vlwgj" podUID="75ffcaf0-9c6e-4f8e-98a0-b9c44529527d" Dec 01 15:04:30 crc kubenswrapper[4637]: W1201 15:04:30.857299 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e2b0a1d_1624_43e9_8f38_9918fa4b0b85.slice/crio-69ec605722da97a32076cdb3e492af47df06eb1f547edde14e6e7ef0ee684762 WatchSource:0}: Error finding container 69ec605722da97a32076cdb3e492af47df06eb1f547edde14e6e7ef0ee684762: Status 404 returned error can't find the container with id 69ec605722da97a32076cdb3e492af47df06eb1f547edde14e6e7ef0ee684762 Dec 01 15:04:30 crc kubenswrapper[4637]: I1201 15:04:30.920051 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6f896d59db-mf67s"] Dec 01 15:04:31 crc kubenswrapper[4637]: I1201 15:04:31.323570 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-sx8n2"] 
Dec 01 15:04:31 crc kubenswrapper[4637]: W1201 15:04:31.354365 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64c249a0_f00c_4b61_930d_e7b28a6a2e6b.slice/crio-de9fac7ad53fc1633224388e212109f835603e0830d6fa7e3b3e752c8e91134f WatchSource:0}: Error finding container de9fac7ad53fc1633224388e212109f835603e0830d6fa7e3b3e752c8e91134f: Status 404 returned error can't find the container with id de9fac7ad53fc1633224388e212109f835603e0830d6fa7e3b3e752c8e91134f Dec 01 15:04:31 crc kubenswrapper[4637]: I1201 15:04:31.431908 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-76ccd9588d-b65nb"] Dec 01 15:04:31 crc kubenswrapper[4637]: I1201 15:04:31.890574 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-ff56c879c-9gwf6"] Dec 01 15:04:31 crc kubenswrapper[4637]: I1201 15:04:31.901377 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ff56c879c-9gwf6" Dec 01 15:04:31 crc kubenswrapper[4637]: I1201 15:04:31.904365 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ff56c879c-9gwf6"] Dec 01 15:04:31 crc kubenswrapper[4637]: I1201 15:04:31.904768 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 01 15:04:31 crc kubenswrapper[4637]: I1201 15:04:31.930321 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 01 15:04:31 crc kubenswrapper[4637]: I1201 15:04:31.938955 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6f896d59db-mf67s" event={"ID":"0e2b0a1d-1624-43e9-8f38-9918fa4b0b85","Type":"ContainerStarted","Data":"72089be77173e969781ce898ce6a455f2ffbdfb8cd91dbe4b4dbb02e5275c55a"} Dec 01 15:04:31 crc kubenswrapper[4637]: I1201 15:04:31.939027 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-6f896d59db-mf67s" event={"ID":"0e2b0a1d-1624-43e9-8f38-9918fa4b0b85","Type":"ContainerStarted","Data":"69ec605722da97a32076cdb3e492af47df06eb1f547edde14e6e7ef0ee684762"} Dec 01 15:04:31 crc kubenswrapper[4637]: I1201 15:04:31.939053 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6f896d59db-mf67s" Dec 01 15:04:31 crc kubenswrapper[4637]: I1201 15:04:31.965739 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94132f58-a470-4b02-acc0-f59d994e07ea","Type":"ContainerStarted","Data":"7420952e58ffa4feeaf35d2110a689b2890a445f6888c993aa4aadf90b4e98f4"} Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.007461 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76ccd9588d-b65nb" event={"ID":"4162adc0-1edf-4059-a924-ef743026eda4","Type":"ContainerStarted","Data":"63eb565e0f828d719bd95cd57a9bd0ac57fe8bbddde200d765c5a7c4e0a7157d"} Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.022999 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/391b6ea5-6446-4755-9075-904efff48769-combined-ca-bundle\") pod \"neutron-ff56c879c-9gwf6\" (UID: \"391b6ea5-6446-4755-9075-904efff48769\") " pod="openstack/neutron-ff56c879c-9gwf6" Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.023092 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/391b6ea5-6446-4755-9075-904efff48769-httpd-config\") pod \"neutron-ff56c879c-9gwf6\" (UID: \"391b6ea5-6446-4755-9075-904efff48769\") " pod="openstack/neutron-ff56c879c-9gwf6" Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.023253 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/391b6ea5-6446-4755-9075-904efff48769-config\") pod \"neutron-ff56c879c-9gwf6\" (UID: \"391b6ea5-6446-4755-9075-904efff48769\") " pod="openstack/neutron-ff56c879c-9gwf6" Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.023284 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/391b6ea5-6446-4755-9075-904efff48769-internal-tls-certs\") pod \"neutron-ff56c879c-9gwf6\" (UID: \"391b6ea5-6446-4755-9075-904efff48769\") " pod="openstack/neutron-ff56c879c-9gwf6" Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.023312 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz8th\" (UniqueName: \"kubernetes.io/projected/391b6ea5-6446-4755-9075-904efff48769-kube-api-access-lz8th\") pod \"neutron-ff56c879c-9gwf6\" (UID: \"391b6ea5-6446-4755-9075-904efff48769\") " pod="openstack/neutron-ff56c879c-9gwf6" Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.023352 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/391b6ea5-6446-4755-9075-904efff48769-ovndb-tls-certs\") pod \"neutron-ff56c879c-9gwf6\" (UID: \"391b6ea5-6446-4755-9075-904efff48769\") " pod="openstack/neutron-ff56c879c-9gwf6" Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.023397 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/391b6ea5-6446-4755-9075-904efff48769-public-tls-certs\") pod \"neutron-ff56c879c-9gwf6\" (UID: \"391b6ea5-6446-4755-9075-904efff48769\") " pod="openstack/neutron-ff56c879c-9gwf6" Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.056052 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" 
podStartSLOduration=31.056031412 podStartE2EDuration="31.056031412s" podCreationTimestamp="2025-12-01 15:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:04:31.996764856 +0000 UTC m=+1122.514473684" watchObservedRunningTime="2025-12-01 15:04:32.056031412 +0000 UTC m=+1122.573740240" Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.059698 4637 generic.go:334] "Generic (PLEG): container finished" podID="64c249a0-f00c-4b61-930d-e7b28a6a2e6b" containerID="0f1fbf1054ff2f30cfbba79d23b8ad1766f803d7723925037fb50ad0eeabfdd4" exitCode=0 Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.059820 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" event={"ID":"64c249a0-f00c-4b61-930d-e7b28a6a2e6b","Type":"ContainerDied","Data":"0f1fbf1054ff2f30cfbba79d23b8ad1766f803d7723925037fb50ad0eeabfdd4"} Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.059851 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" event={"ID":"64c249a0-f00c-4b61-930d-e7b28a6a2e6b","Type":"ContainerStarted","Data":"de9fac7ad53fc1633224388e212109f835603e0830d6fa7e3b3e752c8e91134f"} Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.087200 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6f896d59db-mf67s" podStartSLOduration=21.087176636 podStartE2EDuration="21.087176636s" podCreationTimestamp="2025-12-01 15:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:04:32.040685436 +0000 UTC m=+1122.558394254" watchObservedRunningTime="2025-12-01 15:04:32.087176636 +0000 UTC m=+1122.604885464" Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.099196 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zc7pv" 
event={"ID":"a7137da0-10ce-4ac0-8e2e-658247d8c0b7","Type":"ContainerStarted","Data":"a2697d3eb84cef1935a77452f8f50e75a2a9834c27e819a3a2a15fcb2ba190a0"} Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.132264 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/391b6ea5-6446-4755-9075-904efff48769-config\") pod \"neutron-ff56c879c-9gwf6\" (UID: \"391b6ea5-6446-4755-9075-904efff48769\") " pod="openstack/neutron-ff56c879c-9gwf6" Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.132316 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/391b6ea5-6446-4755-9075-904efff48769-internal-tls-certs\") pod \"neutron-ff56c879c-9gwf6\" (UID: \"391b6ea5-6446-4755-9075-904efff48769\") " pod="openstack/neutron-ff56c879c-9gwf6" Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.132339 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz8th\" (UniqueName: \"kubernetes.io/projected/391b6ea5-6446-4755-9075-904efff48769-kube-api-access-lz8th\") pod \"neutron-ff56c879c-9gwf6\" (UID: \"391b6ea5-6446-4755-9075-904efff48769\") " pod="openstack/neutron-ff56c879c-9gwf6" Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.132372 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/391b6ea5-6446-4755-9075-904efff48769-ovndb-tls-certs\") pod \"neutron-ff56c879c-9gwf6\" (UID: \"391b6ea5-6446-4755-9075-904efff48769\") " pod="openstack/neutron-ff56c879c-9gwf6" Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.132413 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/391b6ea5-6446-4755-9075-904efff48769-public-tls-certs\") pod \"neutron-ff56c879c-9gwf6\" (UID: 
\"391b6ea5-6446-4755-9075-904efff48769\") " pod="openstack/neutron-ff56c879c-9gwf6" Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.132459 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/391b6ea5-6446-4755-9075-904efff48769-combined-ca-bundle\") pod \"neutron-ff56c879c-9gwf6\" (UID: \"391b6ea5-6446-4755-9075-904efff48769\") " pod="openstack/neutron-ff56c879c-9gwf6" Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.132537 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/391b6ea5-6446-4755-9075-904efff48769-httpd-config\") pod \"neutron-ff56c879c-9gwf6\" (UID: \"391b6ea5-6446-4755-9075-904efff48769\") " pod="openstack/neutron-ff56c879c-9gwf6" Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.141780 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/391b6ea5-6446-4755-9075-904efff48769-internal-tls-certs\") pod \"neutron-ff56c879c-9gwf6\" (UID: \"391b6ea5-6446-4755-9075-904efff48769\") " pod="openstack/neutron-ff56c879c-9gwf6" Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.143043 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/391b6ea5-6446-4755-9075-904efff48769-httpd-config\") pod \"neutron-ff56c879c-9gwf6\" (UID: \"391b6ea5-6446-4755-9075-904efff48769\") " pod="openstack/neutron-ff56c879c-9gwf6" Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.156048 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"604935ee-aa8a-461e-9bd9-f11ad29128e0","Type":"ContainerStarted","Data":"74d2f4c7d1a4db87556c04b69d2b4c97f30639aa39376d76ec8f8c01b8a23507"} Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.159949 4637 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/391b6ea5-6446-4755-9075-904efff48769-combined-ca-bundle\") pod \"neutron-ff56c879c-9gwf6\" (UID: \"391b6ea5-6446-4755-9075-904efff48769\") " pod="openstack/neutron-ff56c879c-9gwf6" Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.162820 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/391b6ea5-6446-4755-9075-904efff48769-public-tls-certs\") pod \"neutron-ff56c879c-9gwf6\" (UID: \"391b6ea5-6446-4755-9075-904efff48769\") " pod="openstack/neutron-ff56c879c-9gwf6" Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.164011 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/391b6ea5-6446-4755-9075-904efff48769-ovndb-tls-certs\") pod \"neutron-ff56c879c-9gwf6\" (UID: \"391b6ea5-6446-4755-9075-904efff48769\") " pod="openstack/neutron-ff56c879c-9gwf6" Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.174491 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/391b6ea5-6446-4755-9075-904efff48769-config\") pod \"neutron-ff56c879c-9gwf6\" (UID: \"391b6ea5-6446-4755-9075-904efff48769\") " pod="openstack/neutron-ff56c879c-9gwf6" Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.183576 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-zc7pv" podStartSLOduration=15.396927859 podStartE2EDuration="49.183554778s" podCreationTimestamp="2025-12-01 15:03:43 +0000 UTC" firstStartedPulling="2025-12-01 15:03:56.592140951 +0000 UTC m=+1087.109849779" lastFinishedPulling="2025-12-01 15:04:30.37876787 +0000 UTC m=+1120.896476698" observedRunningTime="2025-12-01 15:04:32.170556606 +0000 UTC m=+1122.688265434" watchObservedRunningTime="2025-12-01 15:04:32.183554778 +0000 UTC m=+1122.701263606" Dec 01 15:04:32 crc 
kubenswrapper[4637]: I1201 15:04:32.234098 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz8th\" (UniqueName: \"kubernetes.io/projected/391b6ea5-6446-4755-9075-904efff48769-kube-api-access-lz8th\") pod \"neutron-ff56c879c-9gwf6\" (UID: \"391b6ea5-6446-4755-9075-904efff48769\") " pod="openstack/neutron-ff56c879c-9gwf6" Dec 01 15:04:32 crc kubenswrapper[4637]: I1201 15:04:32.401598 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ff56c879c-9gwf6" Dec 01 15:04:33 crc kubenswrapper[4637]: I1201 15:04:33.132314 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-fcb665488-kvv69" Dec 01 15:04:33 crc kubenswrapper[4637]: I1201 15:04:33.182819 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" event={"ID":"64c249a0-f00c-4b61-930d-e7b28a6a2e6b","Type":"ContainerStarted","Data":"b530e9ff351b8f9ce04a1a3e2f42175ec632526cc906c7861a04417b1e10efb7"} Dec 01 15:04:33 crc kubenswrapper[4637]: I1201 15:04:33.183971 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" Dec 01 15:04:33 crc kubenswrapper[4637]: I1201 15:04:33.188077 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76ccd9588d-b65nb" event={"ID":"4162adc0-1edf-4059-a924-ef743026eda4","Type":"ContainerStarted","Data":"08f5f48d30b8b32e703127f1a7e1645a00d1778edeb91de396ceebe17595ff4f"} Dec 01 15:04:33 crc kubenswrapper[4637]: I1201 15:04:33.188111 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76ccd9588d-b65nb" event={"ID":"4162adc0-1edf-4059-a924-ef743026eda4","Type":"ContainerStarted","Data":"c6e0e664979bfa7d71901b3ce77ee168b8345d3a3223b0d085e7bca984035fa8"} Dec 01 15:04:33 crc kubenswrapper[4637]: I1201 15:04:33.188126 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/neutron-76ccd9588d-b65nb" Dec 01 15:04:33 crc kubenswrapper[4637]: W1201 15:04:33.258231 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod391b6ea5_6446_4755_9075_904efff48769.slice/crio-fb476901d3d3e3a1883d19faae8e336b5544b5ba863aa6496fc1f2a47fd4feb2 WatchSource:0}: Error finding container fb476901d3d3e3a1883d19faae8e336b5544b5ba863aa6496fc1f2a47fd4feb2: Status 404 returned error can't find the container with id fb476901d3d3e3a1883d19faae8e336b5544b5ba863aa6496fc1f2a47fd4feb2 Dec 01 15:04:33 crc kubenswrapper[4637]: I1201 15:04:33.300358 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ff56c879c-9gwf6"] Dec 01 15:04:33 crc kubenswrapper[4637]: I1201 15:04:33.335166 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-64585bdddb-h9hvw"] Dec 01 15:04:33 crc kubenswrapper[4637]: I1201 15:04:33.335450 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-64585bdddb-h9hvw" podUID="29e960b7-8574-4c38-bb22-67f5a77aaca6" containerName="horizon-log" containerID="cri-o://dc893b69f071bb1a0f4f2bb759966b8e137cd8b71a954090eda6934ed381e989" gracePeriod=30 Dec 01 15:04:33 crc kubenswrapper[4637]: I1201 15:04:33.335593 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-64585bdddb-h9hvw" podUID="29e960b7-8574-4c38-bb22-67f5a77aaca6" containerName="horizon" containerID="cri-o://d5804f49e6c3c4ca3bca0e4b57289f3cace1c1cf7586c4a959a71634341596a3" gracePeriod=30 Dec 01 15:04:33 crc kubenswrapper[4637]: I1201 15:04:33.339816 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" podStartSLOduration=4.339805441 podStartE2EDuration="4.339805441s" podCreationTimestamp="2025-12-01 15:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:04:33.272672122 +0000 UTC m=+1123.790380950" watchObservedRunningTime="2025-12-01 15:04:33.339805441 +0000 UTC m=+1123.857514269" Dec 01 15:04:33 crc kubenswrapper[4637]: I1201 15:04:33.385963 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-76ccd9588d-b65nb" podStartSLOduration=4.385919411 podStartE2EDuration="4.385919411s" podCreationTimestamp="2025-12-01 15:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:04:33.365463426 +0000 UTC m=+1123.883172254" watchObservedRunningTime="2025-12-01 15:04:33.385919411 +0000 UTC m=+1123.903628239" Dec 01 15:04:33 crc kubenswrapper[4637]: I1201 15:04:33.418397 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-64585bdddb-h9hvw" podUID="29e960b7-8574-4c38-bb22-67f5a77aaca6" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Dec 01 15:04:34 crc kubenswrapper[4637]: I1201 15:04:34.202095 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-76ccd9588d-b65nb_4162adc0-1edf-4059-a924-ef743026eda4/neutron-httpd/0.log" Dec 01 15:04:34 crc kubenswrapper[4637]: I1201 15:04:34.202875 4637 generic.go:334] "Generic (PLEG): container finished" podID="4162adc0-1edf-4059-a924-ef743026eda4" containerID="c6e0e664979bfa7d71901b3ce77ee168b8345d3a3223b0d085e7bca984035fa8" exitCode=1 Dec 01 15:04:34 crc kubenswrapper[4637]: I1201 15:04:34.202968 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76ccd9588d-b65nb" event={"ID":"4162adc0-1edf-4059-a924-ef743026eda4","Type":"ContainerDied","Data":"c6e0e664979bfa7d71901b3ce77ee168b8345d3a3223b0d085e7bca984035fa8"} Dec 01 15:04:34 crc kubenswrapper[4637]: I1201 15:04:34.203580 4637 scope.go:117] "RemoveContainer" 
containerID="c6e0e664979bfa7d71901b3ce77ee168b8345d3a3223b0d085e7bca984035fa8" Dec 01 15:04:34 crc kubenswrapper[4637]: I1201 15:04:34.213978 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ff56c879c-9gwf6" event={"ID":"391b6ea5-6446-4755-9075-904efff48769","Type":"ContainerStarted","Data":"52ffd48419080baf7920068dfd09f2e74f29f5d6726b4c63befe408d2d53515d"} Dec 01 15:04:34 crc kubenswrapper[4637]: I1201 15:04:34.214039 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ff56c879c-9gwf6" event={"ID":"391b6ea5-6446-4755-9075-904efff48769","Type":"ContainerStarted","Data":"fb476901d3d3e3a1883d19faae8e336b5544b5ba863aa6496fc1f2a47fd4feb2"} Dec 01 15:04:34 crc kubenswrapper[4637]: I1201 15:04:34.894712 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85bcc8d488-896bl" Dec 01 15:04:35 crc kubenswrapper[4637]: I1201 15:04:35.094521 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85bcc8d488-896bl" Dec 01 15:04:35 crc kubenswrapper[4637]: I1201 15:04:35.239960 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-76ccd9588d-b65nb_4162adc0-1edf-4059-a924-ef743026eda4/neutron-httpd/1.log" Dec 01 15:04:35 crc kubenswrapper[4637]: I1201 15:04:35.241173 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-76ccd9588d-b65nb_4162adc0-1edf-4059-a924-ef743026eda4/neutron-httpd/0.log" Dec 01 15:04:35 crc kubenswrapper[4637]: I1201 15:04:35.251132 4637 generic.go:334] "Generic (PLEG): container finished" podID="4162adc0-1edf-4059-a924-ef743026eda4" containerID="6267ece6d034c205ed970cf866360cfd9a252850edb50de41489c00c3cdafc85" exitCode=1 Dec 01 15:04:35 crc kubenswrapper[4637]: I1201 15:04:35.251269 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76ccd9588d-b65nb" 
event={"ID":"4162adc0-1edf-4059-a924-ef743026eda4","Type":"ContainerDied","Data":"6267ece6d034c205ed970cf866360cfd9a252850edb50de41489c00c3cdafc85"} Dec 01 15:04:35 crc kubenswrapper[4637]: I1201 15:04:35.251318 4637 scope.go:117] "RemoveContainer" containerID="c6e0e664979bfa7d71901b3ce77ee168b8345d3a3223b0d085e7bca984035fa8" Dec 01 15:04:35 crc kubenswrapper[4637]: I1201 15:04:35.252304 4637 scope.go:117] "RemoveContainer" containerID="6267ece6d034c205ed970cf866360cfd9a252850edb50de41489c00c3cdafc85" Dec 01 15:04:35 crc kubenswrapper[4637]: E1201 15:04:35.252660 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=neutron-httpd pod=neutron-76ccd9588d-b65nb_openstack(4162adc0-1edf-4059-a924-ef743026eda4)\"" pod="openstack/neutron-76ccd9588d-b65nb" podUID="4162adc0-1edf-4059-a924-ef743026eda4" Dec 01 15:04:35 crc kubenswrapper[4637]: I1201 15:04:35.285306 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ff56c879c-9gwf6" event={"ID":"391b6ea5-6446-4755-9075-904efff48769","Type":"ContainerStarted","Data":"3e591b95ef096705bbb1103b5649118bbd053b2c88737e657e04dff908384127"} Dec 01 15:04:35 crc kubenswrapper[4637]: I1201 15:04:35.285357 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-ff56c879c-9gwf6" Dec 01 15:04:35 crc kubenswrapper[4637]: I1201 15:04:35.348761 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-ff56c879c-9gwf6" podStartSLOduration=4.348724151 podStartE2EDuration="4.348724151s" podCreationTimestamp="2025-12-01 15:04:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:04:35.344413354 +0000 UTC m=+1125.862122172" watchObservedRunningTime="2025-12-01 15:04:35.348724151 +0000 UTC m=+1125.866432979" Dec 01 15:04:36 crc 
kubenswrapper[4637]: I1201 15:04:36.306409 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-76ccd9588d-b65nb_4162adc0-1edf-4059-a924-ef743026eda4/neutron-httpd/1.log" Dec 01 15:04:36 crc kubenswrapper[4637]: I1201 15:04:36.310194 4637 scope.go:117] "RemoveContainer" containerID="6267ece6d034c205ed970cf866360cfd9a252850edb50de41489c00c3cdafc85" Dec 01 15:04:36 crc kubenswrapper[4637]: E1201 15:04:36.310446 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=neutron-httpd pod=neutron-76ccd9588d-b65nb_openstack(4162adc0-1edf-4059-a924-ef743026eda4)\"" pod="openstack/neutron-76ccd9588d-b65nb" podUID="4162adc0-1edf-4059-a924-ef743026eda4" Dec 01 15:04:36 crc kubenswrapper[4637]: I1201 15:04:36.369186 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:04:37 crc kubenswrapper[4637]: I1201 15:04:37.319207 4637 generic.go:334] "Generic (PLEG): container finished" podID="a7137da0-10ce-4ac0-8e2e-658247d8c0b7" containerID="a2697d3eb84cef1935a77452f8f50e75a2a9834c27e819a3a2a15fcb2ba190a0" exitCode=0 Dec 01 15:04:37 crc kubenswrapper[4637]: I1201 15:04:37.320599 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zc7pv" event={"ID":"a7137da0-10ce-4ac0-8e2e-658247d8c0b7","Type":"ContainerDied","Data":"a2697d3eb84cef1935a77452f8f50e75a2a9834c27e819a3a2a15fcb2ba190a0"} Dec 01 15:04:38 crc kubenswrapper[4637]: I1201 15:04:38.747451 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-zc7pv" Dec 01 15:04:38 crc kubenswrapper[4637]: I1201 15:04:38.812571 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7137da0-10ce-4ac0-8e2e-658247d8c0b7-combined-ca-bundle\") pod \"a7137da0-10ce-4ac0-8e2e-658247d8c0b7\" (UID: \"a7137da0-10ce-4ac0-8e2e-658247d8c0b7\") " Dec 01 15:04:38 crc kubenswrapper[4637]: I1201 15:04:38.812669 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcbkm\" (UniqueName: \"kubernetes.io/projected/a7137da0-10ce-4ac0-8e2e-658247d8c0b7-kube-api-access-rcbkm\") pod \"a7137da0-10ce-4ac0-8e2e-658247d8c0b7\" (UID: \"a7137da0-10ce-4ac0-8e2e-658247d8c0b7\") " Dec 01 15:04:38 crc kubenswrapper[4637]: I1201 15:04:38.812861 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7137da0-10ce-4ac0-8e2e-658247d8c0b7-db-sync-config-data\") pod \"a7137da0-10ce-4ac0-8e2e-658247d8c0b7\" (UID: \"a7137da0-10ce-4ac0-8e2e-658247d8c0b7\") " Dec 01 15:04:38 crc kubenswrapper[4637]: I1201 15:04:38.826147 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7137da0-10ce-4ac0-8e2e-658247d8c0b7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a7137da0-10ce-4ac0-8e2e-658247d8c0b7" (UID: "a7137da0-10ce-4ac0-8e2e-658247d8c0b7"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:04:38 crc kubenswrapper[4637]: I1201 15:04:38.826241 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7137da0-10ce-4ac0-8e2e-658247d8c0b7-kube-api-access-rcbkm" (OuterVolumeSpecName: "kube-api-access-rcbkm") pod "a7137da0-10ce-4ac0-8e2e-658247d8c0b7" (UID: "a7137da0-10ce-4ac0-8e2e-658247d8c0b7"). 
InnerVolumeSpecName "kube-api-access-rcbkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:04:38 crc kubenswrapper[4637]: I1201 15:04:38.916356 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcbkm\" (UniqueName: \"kubernetes.io/projected/a7137da0-10ce-4ac0-8e2e-658247d8c0b7-kube-api-access-rcbkm\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:38 crc kubenswrapper[4637]: I1201 15:04:38.916403 4637 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7137da0-10ce-4ac0-8e2e-658247d8c0b7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:38 crc kubenswrapper[4637]: I1201 15:04:38.957197 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7137da0-10ce-4ac0-8e2e-658247d8c0b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7137da0-10ce-4ac0-8e2e-658247d8c0b7" (UID: "a7137da0-10ce-4ac0-8e2e-658247d8c0b7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.019481 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7137da0-10ce-4ac0-8e2e-658247d8c0b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.340712 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zc7pv" event={"ID":"a7137da0-10ce-4ac0-8e2e-658247d8c0b7","Type":"ContainerDied","Data":"58c61f9ff1c33aed23dc72d02618b788f834ba17c8f983c9f920592d2e6f60c8"} Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.341247 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58c61f9ff1c33aed23dc72d02618b788f834ba17c8f983c9f920592d2e6f60c8" Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.341339 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zc7pv" Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.616467 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7d5b77c96f-nz2mk"] Dec 01 15:04:39 crc kubenswrapper[4637]: E1201 15:04:39.616882 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7137da0-10ce-4ac0-8e2e-658247d8c0b7" containerName="barbican-db-sync" Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.616900 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7137da0-10ce-4ac0-8e2e-658247d8c0b7" containerName="barbican-db-sync" Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.617082 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7137da0-10ce-4ac0-8e2e-658247d8c0b7" containerName="barbican-db-sync" Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.618026 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7d5b77c96f-nz2mk" Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.624348 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-7q9nd" Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.624895 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.625101 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.673751 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7d5b77c96f-nz2mk"] Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.765955 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395c12b6-6b37-4ed6-93fb-65937fa99e65-combined-ca-bundle\") pod \"barbican-worker-7d5b77c96f-nz2mk\" (UID: \"395c12b6-6b37-4ed6-93fb-65937fa99e65\") " pod="openstack/barbican-worker-7d5b77c96f-nz2mk" Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.766895 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42sk9\" (UniqueName: \"kubernetes.io/projected/395c12b6-6b37-4ed6-93fb-65937fa99e65-kube-api-access-42sk9\") pod \"barbican-worker-7d5b77c96f-nz2mk\" (UID: \"395c12b6-6b37-4ed6-93fb-65937fa99e65\") " pod="openstack/barbican-worker-7d5b77c96f-nz2mk" Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.770236 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/395c12b6-6b37-4ed6-93fb-65937fa99e65-config-data-custom\") pod \"barbican-worker-7d5b77c96f-nz2mk\" (UID: \"395c12b6-6b37-4ed6-93fb-65937fa99e65\") " 
pod="openstack/barbican-worker-7d5b77c96f-nz2mk" Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.770342 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395c12b6-6b37-4ed6-93fb-65937fa99e65-config-data\") pod \"barbican-worker-7d5b77c96f-nz2mk\" (UID: \"395c12b6-6b37-4ed6-93fb-65937fa99e65\") " pod="openstack/barbican-worker-7d5b77c96f-nz2mk" Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.770385 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/395c12b6-6b37-4ed6-93fb-65937fa99e65-logs\") pod \"barbican-worker-7d5b77c96f-nz2mk\" (UID: \"395c12b6-6b37-4ed6-93fb-65937fa99e65\") " pod="openstack/barbican-worker-7d5b77c96f-nz2mk" Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.887167 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6fcc69568b-hmqt6"] Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.898099 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6fcc69568b-hmqt6"] Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.889466 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395c12b6-6b37-4ed6-93fb-65937fa99e65-combined-ca-bundle\") pod \"barbican-worker-7d5b77c96f-nz2mk\" (UID: \"395c12b6-6b37-4ed6-93fb-65937fa99e65\") " pod="openstack/barbican-worker-7d5b77c96f-nz2mk" Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.898831 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42sk9\" (UniqueName: \"kubernetes.io/projected/395c12b6-6b37-4ed6-93fb-65937fa99e65-kube-api-access-42sk9\") pod \"barbican-worker-7d5b77c96f-nz2mk\" (UID: \"395c12b6-6b37-4ed6-93fb-65937fa99e65\") " 
pod="openstack/barbican-worker-7d5b77c96f-nz2mk" Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.899004 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/395c12b6-6b37-4ed6-93fb-65937fa99e65-config-data-custom\") pod \"barbican-worker-7d5b77c96f-nz2mk\" (UID: \"395c12b6-6b37-4ed6-93fb-65937fa99e65\") " pod="openstack/barbican-worker-7d5b77c96f-nz2mk" Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.899426 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395c12b6-6b37-4ed6-93fb-65937fa99e65-config-data\") pod \"barbican-worker-7d5b77c96f-nz2mk\" (UID: \"395c12b6-6b37-4ed6-93fb-65937fa99e65\") " pod="openstack/barbican-worker-7d5b77c96f-nz2mk" Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.927011 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/395c12b6-6b37-4ed6-93fb-65937fa99e65-logs\") pod \"barbican-worker-7d5b77c96f-nz2mk\" (UID: \"395c12b6-6b37-4ed6-93fb-65937fa99e65\") " pod="openstack/barbican-worker-7d5b77c96f-nz2mk" Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.898742 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6fcc69568b-hmqt6" Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.926179 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.928486 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/395c12b6-6b37-4ed6-93fb-65937fa99e65-logs\") pod \"barbican-worker-7d5b77c96f-nz2mk\" (UID: \"395c12b6-6b37-4ed6-93fb-65937fa99e65\") " pod="openstack/barbican-worker-7d5b77c96f-nz2mk" Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.930451 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/395c12b6-6b37-4ed6-93fb-65937fa99e65-config-data-custom\") pod \"barbican-worker-7d5b77c96f-nz2mk\" (UID: \"395c12b6-6b37-4ed6-93fb-65937fa99e65\") " pod="openstack/barbican-worker-7d5b77c96f-nz2mk" Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.930964 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.933757 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395c12b6-6b37-4ed6-93fb-65937fa99e65-combined-ca-bundle\") pod \"barbican-worker-7d5b77c96f-nz2mk\" (UID: \"395c12b6-6b37-4ed6-93fb-65937fa99e65\") " pod="openstack/barbican-worker-7d5b77c96f-nz2mk" Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.962221 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42sk9\" (UniqueName: \"kubernetes.io/projected/395c12b6-6b37-4ed6-93fb-65937fa99e65-kube-api-access-42sk9\") pod \"barbican-worker-7d5b77c96f-nz2mk\" (UID: \"395c12b6-6b37-4ed6-93fb-65937fa99e65\") " 
pod="openstack/barbican-worker-7d5b77c96f-nz2mk" Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.977318 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-sx8n2"] Dec 01 15:04:39 crc kubenswrapper[4637]: I1201 15:04:39.982058 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395c12b6-6b37-4ed6-93fb-65937fa99e65-config-data\") pod \"barbican-worker-7d5b77c96f-nz2mk\" (UID: \"395c12b6-6b37-4ed6-93fb-65937fa99e65\") " pod="openstack/barbican-worker-7d5b77c96f-nz2mk" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.028944 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04-logs\") pod \"barbican-keystone-listener-6fcc69568b-hmqt6\" (UID: \"4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04\") " pod="openstack/barbican-keystone-listener-6fcc69568b-hmqt6" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.029053 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk427\" (UniqueName: \"kubernetes.io/projected/4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04-kube-api-access-qk427\") pod \"barbican-keystone-listener-6fcc69568b-hmqt6\" (UID: \"4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04\") " pod="openstack/barbican-keystone-listener-6fcc69568b-hmqt6" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.029158 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04-config-data\") pod \"barbican-keystone-listener-6fcc69568b-hmqt6\" (UID: \"4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04\") " pod="openstack/barbican-keystone-listener-6fcc69568b-hmqt6" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.029249 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04-combined-ca-bundle\") pod \"barbican-keystone-listener-6fcc69568b-hmqt6\" (UID: \"4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04\") " pod="openstack/barbican-keystone-listener-6fcc69568b-hmqt6" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.029336 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04-config-data-custom\") pod \"barbican-keystone-listener-6fcc69568b-hmqt6\" (UID: \"4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04\") " pod="openstack/barbican-keystone-listener-6fcc69568b-hmqt6" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.036307 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-6h6t8"] Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.038086 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.054147 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-6h6t8"] Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.151660 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78bkv\" (UniqueName: \"kubernetes.io/projected/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-kube-api-access-78bkv\") pod \"dnsmasq-dns-688c87cc99-6h6t8\" (UID: \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\") " pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.152245 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04-combined-ca-bundle\") pod \"barbican-keystone-listener-6fcc69568b-hmqt6\" (UID: \"4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04\") " pod="openstack/barbican-keystone-listener-6fcc69568b-hmqt6" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.152293 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-6h6t8\" (UID: \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\") " pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.152326 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04-config-data-custom\") pod \"barbican-keystone-listener-6fcc69568b-hmqt6\" (UID: \"4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04\") " pod="openstack/barbican-keystone-listener-6fcc69568b-hmqt6" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.152358 4637 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-6h6t8\" (UID: \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\") " pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.152412 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-6h6t8\" (UID: \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\") " pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.152442 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04-logs\") pod \"barbican-keystone-listener-6fcc69568b-hmqt6\" (UID: \"4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04\") " pod="openstack/barbican-keystone-listener-6fcc69568b-hmqt6" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.152466 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-config\") pod \"dnsmasq-dns-688c87cc99-6h6t8\" (UID: \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\") " pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.152494 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk427\" (UniqueName: \"kubernetes.io/projected/4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04-kube-api-access-qk427\") pod \"barbican-keystone-listener-6fcc69568b-hmqt6\" (UID: \"4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04\") " pod="openstack/barbican-keystone-listener-6fcc69568b-hmqt6" Dec 01 15:04:40 
crc kubenswrapper[4637]: I1201 15:04:40.152519 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04-config-data\") pod \"barbican-keystone-listener-6fcc69568b-hmqt6\" (UID: \"4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04\") " pod="openstack/barbican-keystone-listener-6fcc69568b-hmqt6" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.152559 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-dns-svc\") pod \"dnsmasq-dns-688c87cc99-6h6t8\" (UID: \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\") " pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.153335 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04-logs\") pod \"barbican-keystone-listener-6fcc69568b-hmqt6\" (UID: \"4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04\") " pod="openstack/barbican-keystone-listener-6fcc69568b-hmqt6" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.160058 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04-config-data\") pod \"barbican-keystone-listener-6fcc69568b-hmqt6\" (UID: \"4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04\") " pod="openstack/barbican-keystone-listener-6fcc69568b-hmqt6" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.160206 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-64585bdddb-h9hvw" podUID="29e960b7-8574-4c38-bb22-67f5a77aaca6" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:34536->10.217.0.147:8443: read: connection reset by peer" Dec 
01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.168486 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04-combined-ca-bundle\") pod \"barbican-keystone-listener-6fcc69568b-hmqt6\" (UID: \"4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04\") " pod="openstack/barbican-keystone-listener-6fcc69568b-hmqt6" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.198382 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04-config-data-custom\") pod \"barbican-keystone-listener-6fcc69568b-hmqt6\" (UID: \"4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04\") " pod="openstack/barbican-keystone-listener-6fcc69568b-hmqt6" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.203561 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk427\" (UniqueName: \"kubernetes.io/projected/4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04-kube-api-access-qk427\") pod \"barbican-keystone-listener-6fcc69568b-hmqt6\" (UID: \"4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04\") " pod="openstack/barbican-keystone-listener-6fcc69568b-hmqt6" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.210226 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7756bcbc94-flscs"] Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.211858 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7756bcbc94-flscs" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.216342 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.229632 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7756bcbc94-flscs"] Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.255581 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-config\") pod \"dnsmasq-dns-688c87cc99-6h6t8\" (UID: \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\") " pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.255701 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-dns-svc\") pod \"dnsmasq-dns-688c87cc99-6h6t8\" (UID: \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\") " pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.255730 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78bkv\" (UniqueName: \"kubernetes.io/projected/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-kube-api-access-78bkv\") pod \"dnsmasq-dns-688c87cc99-6h6t8\" (UID: \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\") " pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.255783 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-6h6t8\" (UID: \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\") " pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 
15:04:40.255828 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-6h6t8\" (UID: \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\") " pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.255877 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-6h6t8\" (UID: \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\") " pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.257135 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-6h6t8\" (UID: \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\") " pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.257265 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-dns-svc\") pod \"dnsmasq-dns-688c87cc99-6h6t8\" (UID: \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\") " pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.257495 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-6h6t8\" (UID: \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\") " pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.257859 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-config\") pod \"dnsmasq-dns-688c87cc99-6h6t8\" (UID: \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\") " pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.258578 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-6h6t8\" (UID: \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\") " pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.265086 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7d5b77c96f-nz2mk" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.279890 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78bkv\" (UniqueName: \"kubernetes.io/projected/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-kube-api-access-78bkv\") pod \"dnsmasq-dns-688c87cc99-6h6t8\" (UID: \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\") " pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.358096 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf391ef-d734-4a25-9726-c7254f4abc1a-combined-ca-bundle\") pod \"barbican-api-7756bcbc94-flscs\" (UID: \"bdf391ef-d734-4a25-9726-c7254f4abc1a\") " pod="openstack/barbican-api-7756bcbc94-flscs" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.358147 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bdf391ef-d734-4a25-9726-c7254f4abc1a-config-data-custom\") pod \"barbican-api-7756bcbc94-flscs\" (UID: \"bdf391ef-d734-4a25-9726-c7254f4abc1a\") " 
pod="openstack/barbican-api-7756bcbc94-flscs" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.358180 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdf391ef-d734-4a25-9726-c7254f4abc1a-logs\") pod \"barbican-api-7756bcbc94-flscs\" (UID: \"bdf391ef-d734-4a25-9726-c7254f4abc1a\") " pod="openstack/barbican-api-7756bcbc94-flscs" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.358208 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdf391ef-d734-4a25-9726-c7254f4abc1a-config-data\") pod \"barbican-api-7756bcbc94-flscs\" (UID: \"bdf391ef-d734-4a25-9726-c7254f4abc1a\") " pod="openstack/barbican-api-7756bcbc94-flscs" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.358285 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxsz5\" (UniqueName: \"kubernetes.io/projected/bdf391ef-d734-4a25-9726-c7254f4abc1a-kube-api-access-xxsz5\") pod \"barbican-api-7756bcbc94-flscs\" (UID: \"bdf391ef-d734-4a25-9726-c7254f4abc1a\") " pod="openstack/barbican-api-7756bcbc94-flscs" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.364232 4637 generic.go:334] "Generic (PLEG): container finished" podID="29e960b7-8574-4c38-bb22-67f5a77aaca6" containerID="d5804f49e6c3c4ca3bca0e4b57289f3cace1c1cf7586c4a959a71634341596a3" exitCode=0 Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.364589 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" podUID="64c249a0-f00c-4b61-930d-e7b28a6a2e6b" containerName="dnsmasq-dns" containerID="cri-o://b530e9ff351b8f9ce04a1a3e2f42175ec632526cc906c7861a04417b1e10efb7" gracePeriod=10 Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.364789 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-64585bdddb-h9hvw" event={"ID":"29e960b7-8574-4c38-bb22-67f5a77aaca6","Type":"ContainerDied","Data":"d5804f49e6c3c4ca3bca0e4b57289f3cace1c1cf7586c4a959a71634341596a3"} Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.381071 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6fcc69568b-hmqt6" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.419544 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.460765 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf391ef-d734-4a25-9726-c7254f4abc1a-combined-ca-bundle\") pod \"barbican-api-7756bcbc94-flscs\" (UID: \"bdf391ef-d734-4a25-9726-c7254f4abc1a\") " pod="openstack/barbican-api-7756bcbc94-flscs" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.460867 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bdf391ef-d734-4a25-9726-c7254f4abc1a-config-data-custom\") pod \"barbican-api-7756bcbc94-flscs\" (UID: \"bdf391ef-d734-4a25-9726-c7254f4abc1a\") " pod="openstack/barbican-api-7756bcbc94-flscs" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.461137 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdf391ef-d734-4a25-9726-c7254f4abc1a-logs\") pod \"barbican-api-7756bcbc94-flscs\" (UID: \"bdf391ef-d734-4a25-9726-c7254f4abc1a\") " pod="openstack/barbican-api-7756bcbc94-flscs" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.461199 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdf391ef-d734-4a25-9726-c7254f4abc1a-config-data\") pod 
\"barbican-api-7756bcbc94-flscs\" (UID: \"bdf391ef-d734-4a25-9726-c7254f4abc1a\") " pod="openstack/barbican-api-7756bcbc94-flscs" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.461328 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxsz5\" (UniqueName: \"kubernetes.io/projected/bdf391ef-d734-4a25-9726-c7254f4abc1a-kube-api-access-xxsz5\") pod \"barbican-api-7756bcbc94-flscs\" (UID: \"bdf391ef-d734-4a25-9726-c7254f4abc1a\") " pod="openstack/barbican-api-7756bcbc94-flscs" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.461962 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdf391ef-d734-4a25-9726-c7254f4abc1a-logs\") pod \"barbican-api-7756bcbc94-flscs\" (UID: \"bdf391ef-d734-4a25-9726-c7254f4abc1a\") " pod="openstack/barbican-api-7756bcbc94-flscs" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.465731 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf391ef-d734-4a25-9726-c7254f4abc1a-combined-ca-bundle\") pod \"barbican-api-7756bcbc94-flscs\" (UID: \"bdf391ef-d734-4a25-9726-c7254f4abc1a\") " pod="openstack/barbican-api-7756bcbc94-flscs" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.470673 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bdf391ef-d734-4a25-9726-c7254f4abc1a-config-data-custom\") pod \"barbican-api-7756bcbc94-flscs\" (UID: \"bdf391ef-d734-4a25-9726-c7254f4abc1a\") " pod="openstack/barbican-api-7756bcbc94-flscs" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.471764 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdf391ef-d734-4a25-9726-c7254f4abc1a-config-data\") pod \"barbican-api-7756bcbc94-flscs\" (UID: \"bdf391ef-d734-4a25-9726-c7254f4abc1a\") 
" pod="openstack/barbican-api-7756bcbc94-flscs" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.486701 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxsz5\" (UniqueName: \"kubernetes.io/projected/bdf391ef-d734-4a25-9726-c7254f4abc1a-kube-api-access-xxsz5\") pod \"barbican-api-7756bcbc94-flscs\" (UID: \"bdf391ef-d734-4a25-9726-c7254f4abc1a\") " pod="openstack/barbican-api-7756bcbc94-flscs" Dec 01 15:04:40 crc kubenswrapper[4637]: I1201 15:04:40.658137 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7756bcbc94-flscs" Dec 01 15:04:41 crc kubenswrapper[4637]: I1201 15:04:41.378467 4637 generic.go:334] "Generic (PLEG): container finished" podID="64c249a0-f00c-4b61-930d-e7b28a6a2e6b" containerID="b530e9ff351b8f9ce04a1a3e2f42175ec632526cc906c7861a04417b1e10efb7" exitCode=0 Dec 01 15:04:41 crc kubenswrapper[4637]: I1201 15:04:41.379198 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" event={"ID":"64c249a0-f00c-4b61-930d-e7b28a6a2e6b","Type":"ContainerDied","Data":"b530e9ff351b8f9ce04a1a3e2f42175ec632526cc906c7861a04417b1e10efb7"} Dec 01 15:04:41 crc kubenswrapper[4637]: I1201 15:04:41.949827 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 01 15:04:41 crc kubenswrapper[4637]: I1201 15:04:41.949882 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 01 15:04:41 crc kubenswrapper[4637]: I1201 15:04:41.996589 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 01 15:04:42 crc kubenswrapper[4637]: I1201 15:04:42.065383 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 01 15:04:42 crc kubenswrapper[4637]: I1201 15:04:42.388326 
4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 01 15:04:42 crc kubenswrapper[4637]: I1201 15:04:42.388389 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 01 15:04:43 crc kubenswrapper[4637]: I1201 15:04:43.570256 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7c95959774-tk5fr"] Dec 01 15:04:43 crc kubenswrapper[4637]: I1201 15:04:43.572123 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7c95959774-tk5fr" Dec 01 15:04:43 crc kubenswrapper[4637]: I1201 15:04:43.575476 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 01 15:04:43 crc kubenswrapper[4637]: I1201 15:04:43.575739 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 01 15:04:43 crc kubenswrapper[4637]: I1201 15:04:43.612121 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7c95959774-tk5fr"] Dec 01 15:04:43 crc kubenswrapper[4637]: I1201 15:04:43.653363 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57082a3e-c5e1-4926-a5b1-306d0becae0c-internal-tls-certs\") pod \"barbican-api-7c95959774-tk5fr\" (UID: \"57082a3e-c5e1-4926-a5b1-306d0becae0c\") " pod="openstack/barbican-api-7c95959774-tk5fr" Dec 01 15:04:43 crc kubenswrapper[4637]: I1201 15:04:43.653512 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgdxs\" (UniqueName: \"kubernetes.io/projected/57082a3e-c5e1-4926-a5b1-306d0becae0c-kube-api-access-sgdxs\") pod \"barbican-api-7c95959774-tk5fr\" (UID: \"57082a3e-c5e1-4926-a5b1-306d0becae0c\") " pod="openstack/barbican-api-7c95959774-tk5fr" Dec 01 15:04:43 
crc kubenswrapper[4637]: I1201 15:04:43.653577 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57082a3e-c5e1-4926-a5b1-306d0becae0c-logs\") pod \"barbican-api-7c95959774-tk5fr\" (UID: \"57082a3e-c5e1-4926-a5b1-306d0becae0c\") " pod="openstack/barbican-api-7c95959774-tk5fr" Dec 01 15:04:43 crc kubenswrapper[4637]: I1201 15:04:43.653707 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57082a3e-c5e1-4926-a5b1-306d0becae0c-public-tls-certs\") pod \"barbican-api-7c95959774-tk5fr\" (UID: \"57082a3e-c5e1-4926-a5b1-306d0becae0c\") " pod="openstack/barbican-api-7c95959774-tk5fr" Dec 01 15:04:43 crc kubenswrapper[4637]: I1201 15:04:43.653782 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57082a3e-c5e1-4926-a5b1-306d0becae0c-config-data\") pod \"barbican-api-7c95959774-tk5fr\" (UID: \"57082a3e-c5e1-4926-a5b1-306d0becae0c\") " pod="openstack/barbican-api-7c95959774-tk5fr" Dec 01 15:04:43 crc kubenswrapper[4637]: I1201 15:04:43.653841 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57082a3e-c5e1-4926-a5b1-306d0becae0c-combined-ca-bundle\") pod \"barbican-api-7c95959774-tk5fr\" (UID: \"57082a3e-c5e1-4926-a5b1-306d0becae0c\") " pod="openstack/barbican-api-7c95959774-tk5fr" Dec 01 15:04:43 crc kubenswrapper[4637]: I1201 15:04:43.653889 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57082a3e-c5e1-4926-a5b1-306d0becae0c-config-data-custom\") pod \"barbican-api-7c95959774-tk5fr\" (UID: \"57082a3e-c5e1-4926-a5b1-306d0becae0c\") " 
pod="openstack/barbican-api-7c95959774-tk5fr" Dec 01 15:04:43 crc kubenswrapper[4637]: I1201 15:04:43.756000 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57082a3e-c5e1-4926-a5b1-306d0becae0c-public-tls-certs\") pod \"barbican-api-7c95959774-tk5fr\" (UID: \"57082a3e-c5e1-4926-a5b1-306d0becae0c\") " pod="openstack/barbican-api-7c95959774-tk5fr" Dec 01 15:04:43 crc kubenswrapper[4637]: I1201 15:04:43.756089 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57082a3e-c5e1-4926-a5b1-306d0becae0c-config-data\") pod \"barbican-api-7c95959774-tk5fr\" (UID: \"57082a3e-c5e1-4926-a5b1-306d0becae0c\") " pod="openstack/barbican-api-7c95959774-tk5fr" Dec 01 15:04:43 crc kubenswrapper[4637]: I1201 15:04:43.756112 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57082a3e-c5e1-4926-a5b1-306d0becae0c-combined-ca-bundle\") pod \"barbican-api-7c95959774-tk5fr\" (UID: \"57082a3e-c5e1-4926-a5b1-306d0becae0c\") " pod="openstack/barbican-api-7c95959774-tk5fr" Dec 01 15:04:43 crc kubenswrapper[4637]: I1201 15:04:43.756141 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57082a3e-c5e1-4926-a5b1-306d0becae0c-config-data-custom\") pod \"barbican-api-7c95959774-tk5fr\" (UID: \"57082a3e-c5e1-4926-a5b1-306d0becae0c\") " pod="openstack/barbican-api-7c95959774-tk5fr" Dec 01 15:04:43 crc kubenswrapper[4637]: I1201 15:04:43.756182 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57082a3e-c5e1-4926-a5b1-306d0becae0c-internal-tls-certs\") pod \"barbican-api-7c95959774-tk5fr\" (UID: \"57082a3e-c5e1-4926-a5b1-306d0becae0c\") " 
pod="openstack/barbican-api-7c95959774-tk5fr" Dec 01 15:04:43 crc kubenswrapper[4637]: I1201 15:04:43.756203 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgdxs\" (UniqueName: \"kubernetes.io/projected/57082a3e-c5e1-4926-a5b1-306d0becae0c-kube-api-access-sgdxs\") pod \"barbican-api-7c95959774-tk5fr\" (UID: \"57082a3e-c5e1-4926-a5b1-306d0becae0c\") " pod="openstack/barbican-api-7c95959774-tk5fr" Dec 01 15:04:43 crc kubenswrapper[4637]: I1201 15:04:43.756250 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57082a3e-c5e1-4926-a5b1-306d0becae0c-logs\") pod \"barbican-api-7c95959774-tk5fr\" (UID: \"57082a3e-c5e1-4926-a5b1-306d0becae0c\") " pod="openstack/barbican-api-7c95959774-tk5fr" Dec 01 15:04:43 crc kubenswrapper[4637]: I1201 15:04:43.756638 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57082a3e-c5e1-4926-a5b1-306d0becae0c-logs\") pod \"barbican-api-7c95959774-tk5fr\" (UID: \"57082a3e-c5e1-4926-a5b1-306d0becae0c\") " pod="openstack/barbican-api-7c95959774-tk5fr" Dec 01 15:04:43 crc kubenswrapper[4637]: I1201 15:04:43.767979 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57082a3e-c5e1-4926-a5b1-306d0becae0c-internal-tls-certs\") pod \"barbican-api-7c95959774-tk5fr\" (UID: \"57082a3e-c5e1-4926-a5b1-306d0becae0c\") " pod="openstack/barbican-api-7c95959774-tk5fr" Dec 01 15:04:43 crc kubenswrapper[4637]: I1201 15:04:43.771713 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57082a3e-c5e1-4926-a5b1-306d0becae0c-combined-ca-bundle\") pod \"barbican-api-7c95959774-tk5fr\" (UID: \"57082a3e-c5e1-4926-a5b1-306d0becae0c\") " pod="openstack/barbican-api-7c95959774-tk5fr" Dec 01 15:04:43 crc 
kubenswrapper[4637]: I1201 15:04:43.777316 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57082a3e-c5e1-4926-a5b1-306d0becae0c-public-tls-certs\") pod \"barbican-api-7c95959774-tk5fr\" (UID: \"57082a3e-c5e1-4926-a5b1-306d0becae0c\") " pod="openstack/barbican-api-7c95959774-tk5fr" Dec 01 15:04:43 crc kubenswrapper[4637]: I1201 15:04:43.777347 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57082a3e-c5e1-4926-a5b1-306d0becae0c-config-data-custom\") pod \"barbican-api-7c95959774-tk5fr\" (UID: \"57082a3e-c5e1-4926-a5b1-306d0becae0c\") " pod="openstack/barbican-api-7c95959774-tk5fr" Dec 01 15:04:43 crc kubenswrapper[4637]: I1201 15:04:43.778544 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgdxs\" (UniqueName: \"kubernetes.io/projected/57082a3e-c5e1-4926-a5b1-306d0becae0c-kube-api-access-sgdxs\") pod \"barbican-api-7c95959774-tk5fr\" (UID: \"57082a3e-c5e1-4926-a5b1-306d0becae0c\") " pod="openstack/barbican-api-7c95959774-tk5fr" Dec 01 15:04:43 crc kubenswrapper[4637]: I1201 15:04:43.780484 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57082a3e-c5e1-4926-a5b1-306d0becae0c-config-data\") pod \"barbican-api-7c95959774-tk5fr\" (UID: \"57082a3e-c5e1-4926-a5b1-306d0becae0c\") " pod="openstack/barbican-api-7c95959774-tk5fr" Dec 01 15:04:43 crc kubenswrapper[4637]: I1201 15:04:43.907196 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7c95959774-tk5fr" Dec 01 15:04:43 crc kubenswrapper[4637]: I1201 15:04:43.968865 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6f896d59db-mf67s" Dec 01 15:04:44 crc kubenswrapper[4637]: I1201 15:04:44.886448 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" podUID="64c249a0-f00c-4b61-930d-e7b28a6a2e6b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.154:5353: connect: connection refused" Dec 01 15:04:45 crc kubenswrapper[4637]: I1201 15:04:45.614155 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:04:45 crc kubenswrapper[4637]: I1201 15:04:45.614509 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:04:45 crc kubenswrapper[4637]: I1201 15:04:45.643908 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 01 15:04:45 crc kubenswrapper[4637]: I1201 15:04:45.644039 4637 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 15:04:45 crc kubenswrapper[4637]: I1201 15:04:45.858112 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 01 15:04:46 crc kubenswrapper[4637]: I1201 15:04:46.504211 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-64585bdddb-h9hvw" 
podUID="29e960b7-8574-4c38-bb22-67f5a77aaca6" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Dec 01 15:04:46 crc kubenswrapper[4637]: I1201 15:04:46.567011 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 01 15:04:46 crc kubenswrapper[4637]: I1201 15:04:46.569266 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 01 15:04:46 crc kubenswrapper[4637]: I1201 15:04:46.579075 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 01 15:04:46 crc kubenswrapper[4637]: I1201 15:04:46.580664 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-797xk" Dec 01 15:04:46 crc kubenswrapper[4637]: I1201 15:04:46.580914 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 01 15:04:46 crc kubenswrapper[4637]: I1201 15:04:46.581051 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 01 15:04:46 crc kubenswrapper[4637]: I1201 15:04:46.653656 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/421d907a-c7b0-4109-8d01-e725459215b9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"421d907a-c7b0-4109-8d01-e725459215b9\") " pod="openstack/openstackclient" Dec 01 15:04:46 crc kubenswrapper[4637]: I1201 15:04:46.653762 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jwgc\" (UniqueName: \"kubernetes.io/projected/421d907a-c7b0-4109-8d01-e725459215b9-kube-api-access-8jwgc\") pod \"openstackclient\" (UID: \"421d907a-c7b0-4109-8d01-e725459215b9\") " 
pod="openstack/openstackclient" Dec 01 15:04:46 crc kubenswrapper[4637]: I1201 15:04:46.653790 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/421d907a-c7b0-4109-8d01-e725459215b9-openstack-config\") pod \"openstackclient\" (UID: \"421d907a-c7b0-4109-8d01-e725459215b9\") " pod="openstack/openstackclient" Dec 01 15:04:46 crc kubenswrapper[4637]: I1201 15:04:46.653862 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/421d907a-c7b0-4109-8d01-e725459215b9-openstack-config-secret\") pod \"openstackclient\" (UID: \"421d907a-c7b0-4109-8d01-e725459215b9\") " pod="openstack/openstackclient" Dec 01 15:04:46 crc kubenswrapper[4637]: I1201 15:04:46.756018 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/421d907a-c7b0-4109-8d01-e725459215b9-openstack-config-secret\") pod \"openstackclient\" (UID: \"421d907a-c7b0-4109-8d01-e725459215b9\") " pod="openstack/openstackclient" Dec 01 15:04:46 crc kubenswrapper[4637]: I1201 15:04:46.756163 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/421d907a-c7b0-4109-8d01-e725459215b9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"421d907a-c7b0-4109-8d01-e725459215b9\") " pod="openstack/openstackclient" Dec 01 15:04:46 crc kubenswrapper[4637]: I1201 15:04:46.756214 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jwgc\" (UniqueName: \"kubernetes.io/projected/421d907a-c7b0-4109-8d01-e725459215b9-kube-api-access-8jwgc\") pod \"openstackclient\" (UID: \"421d907a-c7b0-4109-8d01-e725459215b9\") " pod="openstack/openstackclient" Dec 01 15:04:46 crc kubenswrapper[4637]: I1201 
15:04:46.756238 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/421d907a-c7b0-4109-8d01-e725459215b9-openstack-config\") pod \"openstackclient\" (UID: \"421d907a-c7b0-4109-8d01-e725459215b9\") " pod="openstack/openstackclient" Dec 01 15:04:46 crc kubenswrapper[4637]: I1201 15:04:46.757436 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/421d907a-c7b0-4109-8d01-e725459215b9-openstack-config\") pod \"openstackclient\" (UID: \"421d907a-c7b0-4109-8d01-e725459215b9\") " pod="openstack/openstackclient" Dec 01 15:04:46 crc kubenswrapper[4637]: I1201 15:04:46.763810 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/421d907a-c7b0-4109-8d01-e725459215b9-openstack-config-secret\") pod \"openstackclient\" (UID: \"421d907a-c7b0-4109-8d01-e725459215b9\") " pod="openstack/openstackclient" Dec 01 15:04:46 crc kubenswrapper[4637]: I1201 15:04:46.766532 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/421d907a-c7b0-4109-8d01-e725459215b9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"421d907a-c7b0-4109-8d01-e725459215b9\") " pod="openstack/openstackclient" Dec 01 15:04:46 crc kubenswrapper[4637]: I1201 15:04:46.772006 4637 scope.go:117] "RemoveContainer" containerID="6267ece6d034c205ed970cf866360cfd9a252850edb50de41489c00c3cdafc85" Dec 01 15:04:46 crc kubenswrapper[4637]: I1201 15:04:46.779521 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jwgc\" (UniqueName: \"kubernetes.io/projected/421d907a-c7b0-4109-8d01-e725459215b9-kube-api-access-8jwgc\") pod \"openstackclient\" (UID: \"421d907a-c7b0-4109-8d01-e725459215b9\") " pod="openstack/openstackclient" Dec 01 15:04:46 crc kubenswrapper[4637]: 
I1201 15:04:46.930262 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 01 15:04:49 crc kubenswrapper[4637]: I1201 15:04:49.311483 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" Dec 01 15:04:49 crc kubenswrapper[4637]: I1201 15:04:49.431453 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s996n\" (UniqueName: \"kubernetes.io/projected/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-kube-api-access-s996n\") pod \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\" (UID: \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\") " Dec 01 15:04:49 crc kubenswrapper[4637]: I1201 15:04:49.431546 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-dns-svc\") pod \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\" (UID: \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\") " Dec 01 15:04:49 crc kubenswrapper[4637]: I1201 15:04:49.431583 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-dns-swift-storage-0\") pod \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\" (UID: \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\") " Dec 01 15:04:49 crc kubenswrapper[4637]: I1201 15:04:49.431653 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-ovsdbserver-sb\") pod \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\" (UID: \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\") " Dec 01 15:04:49 crc kubenswrapper[4637]: I1201 15:04:49.431759 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-ovsdbserver-nb\") pod \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\" (UID: \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\") " Dec 01 15:04:49 crc kubenswrapper[4637]: I1201 15:04:49.431846 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-config\") pod \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\" (UID: \"64c249a0-f00c-4b61-930d-e7b28a6a2e6b\") " Dec 01 15:04:49 crc kubenswrapper[4637]: I1201 15:04:49.483347 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-kube-api-access-s996n" (OuterVolumeSpecName: "kube-api-access-s996n") pod "64c249a0-f00c-4b61-930d-e7b28a6a2e6b" (UID: "64c249a0-f00c-4b61-930d-e7b28a6a2e6b"). InnerVolumeSpecName "kube-api-access-s996n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:04:49 crc kubenswrapper[4637]: I1201 15:04:49.539135 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s996n\" (UniqueName: \"kubernetes.io/projected/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-kube-api-access-s996n\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:49 crc kubenswrapper[4637]: I1201 15:04:49.646234 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7c95959774-tk5fr"] Dec 01 15:04:49 crc kubenswrapper[4637]: I1201 15:04:49.677408 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" event={"ID":"64c249a0-f00c-4b61-930d-e7b28a6a2e6b","Type":"ContainerDied","Data":"de9fac7ad53fc1633224388e212109f835603e0830d6fa7e3b3e752c8e91134f"} Dec 01 15:04:49 crc kubenswrapper[4637]: I1201 15:04:49.677456 4637 scope.go:117] "RemoveContainer" containerID="b530e9ff351b8f9ce04a1a3e2f42175ec632526cc906c7861a04417b1e10efb7" Dec 01 15:04:49 crc kubenswrapper[4637]: I1201 15:04:49.677712 4637 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-sx8n2" Dec 01 15:04:49 crc kubenswrapper[4637]: I1201 15:04:49.785080 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-76ccd9588d-b65nb_4162adc0-1edf-4059-a924-ef743026eda4/neutron-httpd/1.log" Dec 01 15:04:49 crc kubenswrapper[4637]: I1201 15:04:49.809256 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "64c249a0-f00c-4b61-930d-e7b28a6a2e6b" (UID: "64c249a0-f00c-4b61-930d-e7b28a6a2e6b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:04:49 crc kubenswrapper[4637]: I1201 15:04:49.866571 4637 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:49 crc kubenswrapper[4637]: I1201 15:04:49.942596 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76ccd9588d-b65nb" event={"ID":"4162adc0-1edf-4059-a924-ef743026eda4","Type":"ContainerStarted","Data":"aa42fe3047e8d39dbb4caa91b253485b3d79bfaa9d48cdfe41701fcac8440e92"} Dec 01 15:04:49 crc kubenswrapper[4637]: I1201 15:04:49.944012 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-76ccd9588d-b65nb" Dec 01 15:04:49 crc kubenswrapper[4637]: I1201 15:04:49.956612 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "64c249a0-f00c-4b61-930d-e7b28a6a2e6b" (UID: "64c249a0-f00c-4b61-930d-e7b28a6a2e6b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:04:49 crc kubenswrapper[4637]: I1201 15:04:49.971886 4637 scope.go:117] "RemoveContainer" containerID="0f1fbf1054ff2f30cfbba79d23b8ad1766f803d7723925037fb50ad0eeabfdd4" Dec 01 15:04:49 crc kubenswrapper[4637]: I1201 15:04:49.972991 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-config" (OuterVolumeSpecName: "config") pod "64c249a0-f00c-4b61-930d-e7b28a6a2e6b" (UID: "64c249a0-f00c-4b61-930d-e7b28a6a2e6b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:04:49 crc kubenswrapper[4637]: I1201 15:04:49.975258 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-config\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:49 crc kubenswrapper[4637]: I1201 15:04:49.975384 4637 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.029278 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "64c249a0-f00c-4b61-930d-e7b28a6a2e6b" (UID: "64c249a0-f00c-4b61-930d-e7b28a6a2e6b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.047306 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "64c249a0-f00c-4b61-930d-e7b28a6a2e6b" (UID: "64c249a0-f00c-4b61-930d-e7b28a6a2e6b"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.082167 4637 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.082221 4637 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64c249a0-f00c-4b61-930d-e7b28a6a2e6b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.249117 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7d5b77c96f-nz2mk"] Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.283088 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6fcc69568b-hmqt6"] Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.301212 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-6h6t8"] Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.322887 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7756bcbc94-flscs"] Dec 01 15:04:50 crc kubenswrapper[4637]: W1201 15:04:50.330307 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdf391ef_d734_4a25_9726_c7254f4abc1a.slice/crio-c6e00dd1c2cc2977bb60a410f3b78af59dd525cb7a28a74c4451e641b443c094 WatchSource:0}: Error finding container c6e00dd1c2cc2977bb60a410f3b78af59dd525cb7a28a74c4451e641b443c094: Status 404 returned error can't find the container with id c6e00dd1c2cc2977bb60a410f3b78af59dd525cb7a28a74c4451e641b443c094 Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.338413 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/openstackclient"] Dec 01 15:04:50 crc kubenswrapper[4637]: W1201 15:04:50.343382 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bda0b2e_3c55_4e3c_8ad1_1f8b631a2b3d.slice/crio-93d768b09977873400f490da25d586231d0be2e11ee57947190ef41f4ebbaeb1 WatchSource:0}: Error finding container 93d768b09977873400f490da25d586231d0be2e11ee57947190ef41f4ebbaeb1: Status 404 returned error can't find the container with id 93d768b09977873400f490da25d586231d0be2e11ee57947190ef41f4ebbaeb1 Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.351003 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-sx8n2"] Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.374330 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-sx8n2"] Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.841988 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"421d907a-c7b0-4109-8d01-e725459215b9","Type":"ContainerStarted","Data":"a21a05488edfdafb5a035cc0225dd138558b8c668c5018ece5acba2c033de9a2"} Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.851855 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7756bcbc94-flscs" event={"ID":"bdf391ef-d734-4a25-9726-c7254f4abc1a","Type":"ContainerStarted","Data":"dce303b4b49cde975dfdcc16ee67330e1a7299498e48ba58dfd56bbac51376b3"} Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.851914 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7756bcbc94-flscs" event={"ID":"bdf391ef-d734-4a25-9726-c7254f4abc1a","Type":"ContainerStarted","Data":"c6e00dd1c2cc2977bb60a410f3b78af59dd525cb7a28a74c4451e641b443c094"} Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.859143 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-7c95959774-tk5fr" event={"ID":"57082a3e-c5e1-4926-a5b1-306d0becae0c","Type":"ContainerStarted","Data":"d6ba30d4e76cf7f497d5efe95db036a5cfcb094ea13da91e352488c9323ea946"} Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.859201 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c95959774-tk5fr" event={"ID":"57082a3e-c5e1-4926-a5b1-306d0becae0c","Type":"ContainerStarted","Data":"7735092c4eae1bd5200c22e416cc8ea13d3b769495f22248ea9a2a1805c6053e"} Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.859215 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c95959774-tk5fr" event={"ID":"57082a3e-c5e1-4926-a5b1-306d0becae0c","Type":"ContainerStarted","Data":"1e6d51c9dfa46679cb829fd82c64e0f3735bac2bf84f6ba7fe2d5be0eb27cffd"} Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.860514 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7c95959774-tk5fr" Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.860540 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7c95959774-tk5fr" Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.872176 4637 generic.go:334] "Generic (PLEG): container finished" podID="1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d" containerID="260c9841fa3ccd6c05ebfb0503091a1972efbd0841f35e026bcb259651989775" exitCode=0 Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.872252 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" event={"ID":"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d","Type":"ContainerDied","Data":"260c9841fa3ccd6c05ebfb0503091a1972efbd0841f35e026bcb259651989775"} Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.872285 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" 
event={"ID":"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d","Type":"ContainerStarted","Data":"93d768b09977873400f490da25d586231d0be2e11ee57947190ef41f4ebbaeb1"} Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.879085 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6fcc69568b-hmqt6" event={"ID":"4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04","Type":"ContainerStarted","Data":"bccab611573d8e6ed7beab3478c98d323c140eddd033ec8ba6407eb292491a04"} Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.886377 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7c95959774-tk5fr" podStartSLOduration=7.886362074 podStartE2EDuration="7.886362074s" podCreationTimestamp="2025-12-01 15:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:04:50.885471369 +0000 UTC m=+1141.403180197" watchObservedRunningTime="2025-12-01 15:04:50.886362074 +0000 UTC m=+1141.404070902" Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.893186 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d5b77c96f-nz2mk" event={"ID":"395c12b6-6b37-4ed6-93fb-65937fa99e65","Type":"ContainerStarted","Data":"11b9c94a837e213974375dab6034487c0b95a8e25c9a0d01e630acb1bda49401"} Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.963297 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vlwgj" event={"ID":"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d","Type":"ContainerStarted","Data":"7f132f75ca313511f7e2de9532e9d8538b2c463379d270be61033ae1bdc00b40"} Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.985736 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"604935ee-aa8a-461e-9bd9-f11ad29128e0","Type":"ContainerStarted","Data":"7ac479468ed3b8de444e6ff064046dc11a7436dfd1beb747a1c45a4e9f95f251"} Dec 01 15:04:50 crc 
kubenswrapper[4637]: I1201 15:04:50.985829 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="604935ee-aa8a-461e-9bd9-f11ad29128e0" containerName="ceilometer-central-agent" containerID="cri-o://ea66906f740f05d232e3969f5a3f2e8f2ca61be689ee3d1c2d22bc42dcb03a72" gracePeriod=30 Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.986153 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="604935ee-aa8a-461e-9bd9-f11ad29128e0" containerName="proxy-httpd" containerID="cri-o://7ac479468ed3b8de444e6ff064046dc11a7436dfd1beb747a1c45a4e9f95f251" gracePeriod=30 Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.986207 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="604935ee-aa8a-461e-9bd9-f11ad29128e0" containerName="sg-core" containerID="cri-o://74d2f4c7d1a4db87556c04b69d2b4c97f30639aa39376d76ec8f8c01b8a23507" gracePeriod=30 Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.986262 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="604935ee-aa8a-461e-9bd9-f11ad29128e0" containerName="ceilometer-notification-agent" containerID="cri-o://7f9c7cf675e28d17e863b7f77c787446193ef11f2ab1e4521ae4276018cd005a" gracePeriod=30 Dec 01 15:04:50 crc kubenswrapper[4637]: I1201 15:04:50.986264 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 01 15:04:51 crc kubenswrapper[4637]: I1201 15:04:51.000769 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-vlwgj" podStartSLOduration=16.599395346 podStartE2EDuration="1m9.000740324s" podCreationTimestamp="2025-12-01 15:03:42 +0000 UTC" firstStartedPulling="2025-12-01 15:03:56.519308317 +0000 UTC m=+1087.037017135" lastFinishedPulling="2025-12-01 15:04:48.920653285 +0000 UTC m=+1139.438362113" 
observedRunningTime="2025-12-01 15:04:50.982044166 +0000 UTC m=+1141.499752994" watchObservedRunningTime="2025-12-01 15:04:51.000740324 +0000 UTC m=+1141.518449172" Dec 01 15:04:51 crc kubenswrapper[4637]: I1201 15:04:51.005708 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-76ccd9588d-b65nb_4162adc0-1edf-4059-a924-ef743026eda4/neutron-httpd/2.log" Dec 01 15:04:51 crc kubenswrapper[4637]: I1201 15:04:51.011151 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-76ccd9588d-b65nb_4162adc0-1edf-4059-a924-ef743026eda4/neutron-httpd/1.log" Dec 01 15:04:51 crc kubenswrapper[4637]: I1201 15:04:51.014843 4637 generic.go:334] "Generic (PLEG): container finished" podID="4162adc0-1edf-4059-a924-ef743026eda4" containerID="aa42fe3047e8d39dbb4caa91b253485b3d79bfaa9d48cdfe41701fcac8440e92" exitCode=1 Dec 01 15:04:51 crc kubenswrapper[4637]: I1201 15:04:51.014899 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76ccd9588d-b65nb" event={"ID":"4162adc0-1edf-4059-a924-ef743026eda4","Type":"ContainerDied","Data":"aa42fe3047e8d39dbb4caa91b253485b3d79bfaa9d48cdfe41701fcac8440e92"} Dec 01 15:04:51 crc kubenswrapper[4637]: I1201 15:04:51.014962 4637 scope.go:117] "RemoveContainer" containerID="6267ece6d034c205ed970cf866360cfd9a252850edb50de41489c00c3cdafc85" Dec 01 15:04:51 crc kubenswrapper[4637]: I1201 15:04:51.016304 4637 scope.go:117] "RemoveContainer" containerID="aa42fe3047e8d39dbb4caa91b253485b3d79bfaa9d48cdfe41701fcac8440e92" Dec 01 15:04:51 crc kubenswrapper[4637]: E1201 15:04:51.016552 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd pod=neutron-76ccd9588d-b65nb_openstack(4162adc0-1edf-4059-a924-ef743026eda4)\"" pod="openstack/neutron-76ccd9588d-b65nb" podUID="4162adc0-1edf-4059-a924-ef743026eda4" Dec 01 15:04:51 crc kubenswrapper[4637]: I1201 
15:04:51.058310 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.81044248 podStartE2EDuration="1m19.058282293s" podCreationTimestamp="2025-12-01 15:03:32 +0000 UTC" firstStartedPulling="2025-12-01 15:03:34.644480945 +0000 UTC m=+1065.162189763" lastFinishedPulling="2025-12-01 15:04:48.892320748 +0000 UTC m=+1139.410029576" observedRunningTime="2025-12-01 15:04:51.017012894 +0000 UTC m=+1141.534721712" watchObservedRunningTime="2025-12-01 15:04:51.058282293 +0000 UTC m=+1141.575991121" Dec 01 15:04:51 crc kubenswrapper[4637]: I1201 15:04:51.790484 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64c249a0-f00c-4b61-930d-e7b28a6a2e6b" path="/var/lib/kubelet/pods/64c249a0-f00c-4b61-930d-e7b28a6a2e6b/volumes" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.028783 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" event={"ID":"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d","Type":"ContainerStarted","Data":"a92e2029a49e7e114e974ec936f388a4170b162d93183163203dd409162da6f2"} Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.028989 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.032696 4637 generic.go:334] "Generic (PLEG): container finished" podID="604935ee-aa8a-461e-9bd9-f11ad29128e0" containerID="7ac479468ed3b8de444e6ff064046dc11a7436dfd1beb747a1c45a4e9f95f251" exitCode=0 Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.032716 4637 generic.go:334] "Generic (PLEG): container finished" podID="604935ee-aa8a-461e-9bd9-f11ad29128e0" containerID="74d2f4c7d1a4db87556c04b69d2b4c97f30639aa39376d76ec8f8c01b8a23507" exitCode=2 Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.032724 4637 generic.go:334] "Generic (PLEG): container finished" podID="604935ee-aa8a-461e-9bd9-f11ad29128e0" 
containerID="ea66906f740f05d232e3969f5a3f2e8f2ca61be689ee3d1c2d22bc42dcb03a72" exitCode=0 Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.032757 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"604935ee-aa8a-461e-9bd9-f11ad29128e0","Type":"ContainerDied","Data":"7ac479468ed3b8de444e6ff064046dc11a7436dfd1beb747a1c45a4e9f95f251"} Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.032776 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"604935ee-aa8a-461e-9bd9-f11ad29128e0","Type":"ContainerDied","Data":"74d2f4c7d1a4db87556c04b69d2b4c97f30639aa39376d76ec8f8c01b8a23507"} Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.032785 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"604935ee-aa8a-461e-9bd9-f11ad29128e0","Type":"ContainerDied","Data":"ea66906f740f05d232e3969f5a3f2e8f2ca61be689ee3d1c2d22bc42dcb03a72"} Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.037865 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-76ccd9588d-b65nb_4162adc0-1edf-4059-a924-ef743026eda4/neutron-httpd/2.log" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.039523 4637 scope.go:117] "RemoveContainer" containerID="aa42fe3047e8d39dbb4caa91b253485b3d79bfaa9d48cdfe41701fcac8440e92" Dec 01 15:04:52 crc kubenswrapper[4637]: E1201 15:04:52.039795 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd pod=neutron-76ccd9588d-b65nb_openstack(4162adc0-1edf-4059-a924-ef743026eda4)\"" pod="openstack/neutron-76ccd9588d-b65nb" podUID="4162adc0-1edf-4059-a924-ef743026eda4" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.042239 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7756bcbc94-flscs" 
event={"ID":"bdf391ef-d734-4a25-9726-c7254f4abc1a","Type":"ContainerStarted","Data":"fd402de5b7c0948863d9dcaf6ee7a38b36d87c1727ef90984d357a19fab97b43"} Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.042566 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7756bcbc94-flscs" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.042633 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7756bcbc94-flscs" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.056902 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" podStartSLOduration=13.056879333 podStartE2EDuration="13.056879333s" podCreationTimestamp="2025-12-01 15:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:04:52.052886515 +0000 UTC m=+1142.570595343" watchObservedRunningTime="2025-12-01 15:04:52.056879333 +0000 UTC m=+1142.574588161" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.104009 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7756bcbc94-flscs" podStartSLOduration=12.103988749 podStartE2EDuration="12.103988749s" podCreationTimestamp="2025-12-01 15:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:04:52.096150458 +0000 UTC m=+1142.613859286" watchObservedRunningTime="2025-12-01 15:04:52.103988749 +0000 UTC m=+1142.621697577" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.593999 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-84f489b6b7-wswv6"] Dec 01 15:04:52 crc kubenswrapper[4637]: E1201 15:04:52.594360 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64c249a0-f00c-4b61-930d-e7b28a6a2e6b" 
containerName="dnsmasq-dns" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.594376 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="64c249a0-f00c-4b61-930d-e7b28a6a2e6b" containerName="dnsmasq-dns" Dec 01 15:04:52 crc kubenswrapper[4637]: E1201 15:04:52.594401 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64c249a0-f00c-4b61-930d-e7b28a6a2e6b" containerName="init" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.594407 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="64c249a0-f00c-4b61-930d-e7b28a6a2e6b" containerName="init" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.594600 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="64c249a0-f00c-4b61-930d-e7b28a6a2e6b" containerName="dnsmasq-dns" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.595603 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.625640 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.627995 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.628308 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.645504 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-84f489b6b7-wswv6"] Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.687824 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da-combined-ca-bundle\") pod \"swift-proxy-84f489b6b7-wswv6\" (UID: 
\"f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da\") " pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.687965 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da-run-httpd\") pod \"swift-proxy-84f489b6b7-wswv6\" (UID: \"f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da\") " pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.688009 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da-etc-swift\") pod \"swift-proxy-84f489b6b7-wswv6\" (UID: \"f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da\") " pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.688128 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da-public-tls-certs\") pod \"swift-proxy-84f489b6b7-wswv6\" (UID: \"f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da\") " pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.688220 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da-internal-tls-certs\") pod \"swift-proxy-84f489b6b7-wswv6\" (UID: \"f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da\") " pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.690389 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da-config-data\") pod 
\"swift-proxy-84f489b6b7-wswv6\" (UID: \"f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da\") " pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.690602 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da-log-httpd\") pod \"swift-proxy-84f489b6b7-wswv6\" (UID: \"f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da\") " pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.690623 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq4vh\" (UniqueName: \"kubernetes.io/projected/f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da-kube-api-access-cq4vh\") pod \"swift-proxy-84f489b6b7-wswv6\" (UID: \"f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da\") " pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.792467 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da-config-data\") pod \"swift-proxy-84f489b6b7-wswv6\" (UID: \"f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da\") " pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.792554 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da-log-httpd\") pod \"swift-proxy-84f489b6b7-wswv6\" (UID: \"f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da\") " pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.792581 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq4vh\" (UniqueName: \"kubernetes.io/projected/f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da-kube-api-access-cq4vh\") pod 
\"swift-proxy-84f489b6b7-wswv6\" (UID: \"f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da\") " pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.792624 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da-combined-ca-bundle\") pod \"swift-proxy-84f489b6b7-wswv6\" (UID: \"f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da\") " pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.792653 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da-run-httpd\") pod \"swift-proxy-84f489b6b7-wswv6\" (UID: \"f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da\") " pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.792678 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da-etc-swift\") pod \"swift-proxy-84f489b6b7-wswv6\" (UID: \"f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da\") " pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.792723 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da-public-tls-certs\") pod \"swift-proxy-84f489b6b7-wswv6\" (UID: \"f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da\") " pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.792764 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da-internal-tls-certs\") pod \"swift-proxy-84f489b6b7-wswv6\" (UID: 
\"f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da\") " pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.795418 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da-log-httpd\") pod \"swift-proxy-84f489b6b7-wswv6\" (UID: \"f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da\") " pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.798023 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da-run-httpd\") pod \"swift-proxy-84f489b6b7-wswv6\" (UID: \"f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da\") " pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.800682 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da-combined-ca-bundle\") pod \"swift-proxy-84f489b6b7-wswv6\" (UID: \"f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da\") " pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.807095 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da-etc-swift\") pod \"swift-proxy-84f489b6b7-wswv6\" (UID: \"f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da\") " pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.807549 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da-public-tls-certs\") pod \"swift-proxy-84f489b6b7-wswv6\" (UID: \"f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da\") " pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:04:52 crc 
kubenswrapper[4637]: I1201 15:04:52.810546 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da-config-data\") pod \"swift-proxy-84f489b6b7-wswv6\" (UID: \"f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da\") " pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.821973 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da-internal-tls-certs\") pod \"swift-proxy-84f489b6b7-wswv6\" (UID: \"f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da\") " pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.822012 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq4vh\" (UniqueName: \"kubernetes.io/projected/f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da-kube-api-access-cq4vh\") pod \"swift-proxy-84f489b6b7-wswv6\" (UID: \"f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da\") " pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:04:52 crc kubenswrapper[4637]: I1201 15:04:52.923418 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.028987 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.131541 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604935ee-aa8a-461e-9bd9-f11ad29128e0-run-httpd\") pod \"604935ee-aa8a-461e-9bd9-f11ad29128e0\" (UID: \"604935ee-aa8a-461e-9bd9-f11ad29128e0\") " Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.132046 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604935ee-aa8a-461e-9bd9-f11ad29128e0-combined-ca-bundle\") pod \"604935ee-aa8a-461e-9bd9-f11ad29128e0\" (UID: \"604935ee-aa8a-461e-9bd9-f11ad29128e0\") " Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.132114 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604935ee-aa8a-461e-9bd9-f11ad29128e0-log-httpd\") pod \"604935ee-aa8a-461e-9bd9-f11ad29128e0\" (UID: \"604935ee-aa8a-461e-9bd9-f11ad29128e0\") " Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.132138 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/604935ee-aa8a-461e-9bd9-f11ad29128e0-sg-core-conf-yaml\") pod \"604935ee-aa8a-461e-9bd9-f11ad29128e0\" (UID: \"604935ee-aa8a-461e-9bd9-f11ad29128e0\") " Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.132269 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/604935ee-aa8a-461e-9bd9-f11ad29128e0-config-data\") pod \"604935ee-aa8a-461e-9bd9-f11ad29128e0\" (UID: \"604935ee-aa8a-461e-9bd9-f11ad29128e0\") " Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.132339 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nsfv\" (UniqueName: 
\"kubernetes.io/projected/604935ee-aa8a-461e-9bd9-f11ad29128e0-kube-api-access-9nsfv\") pod \"604935ee-aa8a-461e-9bd9-f11ad29128e0\" (UID: \"604935ee-aa8a-461e-9bd9-f11ad29128e0\") " Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.132390 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/604935ee-aa8a-461e-9bd9-f11ad29128e0-scripts\") pod \"604935ee-aa8a-461e-9bd9-f11ad29128e0\" (UID: \"604935ee-aa8a-461e-9bd9-f11ad29128e0\") " Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.134292 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/604935ee-aa8a-461e-9bd9-f11ad29128e0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "604935ee-aa8a-461e-9bd9-f11ad29128e0" (UID: "604935ee-aa8a-461e-9bd9-f11ad29128e0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.134543 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/604935ee-aa8a-461e-9bd9-f11ad29128e0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "604935ee-aa8a-461e-9bd9-f11ad29128e0" (UID: "604935ee-aa8a-461e-9bd9-f11ad29128e0"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.147379 4637 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604935ee-aa8a-461e-9bd9-f11ad29128e0-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.147415 4637 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604935ee-aa8a-461e-9bd9-f11ad29128e0-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.157783 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d5b77c96f-nz2mk" event={"ID":"395c12b6-6b37-4ed6-93fb-65937fa99e65","Type":"ContainerStarted","Data":"dcee1131fad596106f58f99f6b1ee2cb260fdb35ead64ee83f3c5a2f7547cc15"} Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.158126 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604935ee-aa8a-461e-9bd9-f11ad29128e0-scripts" (OuterVolumeSpecName: "scripts") pod "604935ee-aa8a-461e-9bd9-f11ad29128e0" (UID: "604935ee-aa8a-461e-9bd9-f11ad29128e0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.183743 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/604935ee-aa8a-461e-9bd9-f11ad29128e0-kube-api-access-9nsfv" (OuterVolumeSpecName: "kube-api-access-9nsfv") pod "604935ee-aa8a-461e-9bd9-f11ad29128e0" (UID: "604935ee-aa8a-461e-9bd9-f11ad29128e0"). InnerVolumeSpecName "kube-api-access-9nsfv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.183903 4637 generic.go:334] "Generic (PLEG): container finished" podID="604935ee-aa8a-461e-9bd9-f11ad29128e0" containerID="7f9c7cf675e28d17e863b7f77c787446193ef11f2ab1e4521ae4276018cd005a" exitCode=0 Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.184038 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.184725 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"604935ee-aa8a-461e-9bd9-f11ad29128e0","Type":"ContainerDied","Data":"7f9c7cf675e28d17e863b7f77c787446193ef11f2ab1e4521ae4276018cd005a"} Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.184788 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"604935ee-aa8a-461e-9bd9-f11ad29128e0","Type":"ContainerDied","Data":"6e3c037800958b3e108ef40cf3e8cacf27545836826a57def6bb450382341103"} Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.184807 4637 scope.go:117] "RemoveContainer" containerID="7ac479468ed3b8de444e6ff064046dc11a7436dfd1beb747a1c45a4e9f95f251" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.193733 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6fcc69568b-hmqt6" event={"ID":"4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04","Type":"ContainerStarted","Data":"06d271efae16cbd77475f1730cd1aadcf98ba5ee62d6bfcd9caed6ca13c84053"} Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.230427 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-84f489b6b7-wswv6"] Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.249859 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nsfv\" (UniqueName: \"kubernetes.io/projected/604935ee-aa8a-461e-9bd9-f11ad29128e0-kube-api-access-9nsfv\") 
on node \"crc\" DevicePath \"\"" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.249894 4637 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/604935ee-aa8a-461e-9bd9-f11ad29128e0-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.324465 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604935ee-aa8a-461e-9bd9-f11ad29128e0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "604935ee-aa8a-461e-9bd9-f11ad29128e0" (UID: "604935ee-aa8a-461e-9bd9-f11ad29128e0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.351438 4637 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/604935ee-aa8a-461e-9bd9-f11ad29128e0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.396580 4637 scope.go:117] "RemoveContainer" containerID="74d2f4c7d1a4db87556c04b69d2b4c97f30639aa39376d76ec8f8c01b8a23507" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.433255 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604935ee-aa8a-461e-9bd9-f11ad29128e0-config-data" (OuterVolumeSpecName: "config-data") pod "604935ee-aa8a-461e-9bd9-f11ad29128e0" (UID: "604935ee-aa8a-461e-9bd9-f11ad29128e0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.434307 4637 scope.go:117] "RemoveContainer" containerID="7f9c7cf675e28d17e863b7f77c787446193ef11f2ab1e4521ae4276018cd005a" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.435258 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604935ee-aa8a-461e-9bd9-f11ad29128e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "604935ee-aa8a-461e-9bd9-f11ad29128e0" (UID: "604935ee-aa8a-461e-9bd9-f11ad29128e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.453592 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/604935ee-aa8a-461e-9bd9-f11ad29128e0-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.453646 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604935ee-aa8a-461e-9bd9-f11ad29128e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.478461 4637 scope.go:117] "RemoveContainer" containerID="ea66906f740f05d232e3969f5a3f2e8f2ca61be689ee3d1c2d22bc42dcb03a72" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.533590 4637 scope.go:117] "RemoveContainer" containerID="7ac479468ed3b8de444e6ff064046dc11a7436dfd1beb747a1c45a4e9f95f251" Dec 01 15:04:54 crc kubenswrapper[4637]: E1201 15:04:54.540110 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ac479468ed3b8de444e6ff064046dc11a7436dfd1beb747a1c45a4e9f95f251\": container with ID starting with 7ac479468ed3b8de444e6ff064046dc11a7436dfd1beb747a1c45a4e9f95f251 not found: ID does not exist" 
containerID="7ac479468ed3b8de444e6ff064046dc11a7436dfd1beb747a1c45a4e9f95f251" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.540183 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ac479468ed3b8de444e6ff064046dc11a7436dfd1beb747a1c45a4e9f95f251"} err="failed to get container status \"7ac479468ed3b8de444e6ff064046dc11a7436dfd1beb747a1c45a4e9f95f251\": rpc error: code = NotFound desc = could not find container \"7ac479468ed3b8de444e6ff064046dc11a7436dfd1beb747a1c45a4e9f95f251\": container with ID starting with 7ac479468ed3b8de444e6ff064046dc11a7436dfd1beb747a1c45a4e9f95f251 not found: ID does not exist" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.540213 4637 scope.go:117] "RemoveContainer" containerID="74d2f4c7d1a4db87556c04b69d2b4c97f30639aa39376d76ec8f8c01b8a23507" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.543593 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:04:54 crc kubenswrapper[4637]: E1201 15:04:54.549177 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74d2f4c7d1a4db87556c04b69d2b4c97f30639aa39376d76ec8f8c01b8a23507\": container with ID starting with 74d2f4c7d1a4db87556c04b69d2b4c97f30639aa39376d76ec8f8c01b8a23507 not found: ID does not exist" containerID="74d2f4c7d1a4db87556c04b69d2b4c97f30639aa39376d76ec8f8c01b8a23507" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.549234 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74d2f4c7d1a4db87556c04b69d2b4c97f30639aa39376d76ec8f8c01b8a23507"} err="failed to get container status \"74d2f4c7d1a4db87556c04b69d2b4c97f30639aa39376d76ec8f8c01b8a23507\": rpc error: code = NotFound desc = could not find container \"74d2f4c7d1a4db87556c04b69d2b4c97f30639aa39376d76ec8f8c01b8a23507\": container with ID starting with 
74d2f4c7d1a4db87556c04b69d2b4c97f30639aa39376d76ec8f8c01b8a23507 not found: ID does not exist" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.549264 4637 scope.go:117] "RemoveContainer" containerID="7f9c7cf675e28d17e863b7f77c787446193ef11f2ab1e4521ae4276018cd005a" Dec 01 15:04:54 crc kubenswrapper[4637]: E1201 15:04:54.554127 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f9c7cf675e28d17e863b7f77c787446193ef11f2ab1e4521ae4276018cd005a\": container with ID starting with 7f9c7cf675e28d17e863b7f77c787446193ef11f2ab1e4521ae4276018cd005a not found: ID does not exist" containerID="7f9c7cf675e28d17e863b7f77c787446193ef11f2ab1e4521ae4276018cd005a" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.554190 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f9c7cf675e28d17e863b7f77c787446193ef11f2ab1e4521ae4276018cd005a"} err="failed to get container status \"7f9c7cf675e28d17e863b7f77c787446193ef11f2ab1e4521ae4276018cd005a\": rpc error: code = NotFound desc = could not find container \"7f9c7cf675e28d17e863b7f77c787446193ef11f2ab1e4521ae4276018cd005a\": container with ID starting with 7f9c7cf675e28d17e863b7f77c787446193ef11f2ab1e4521ae4276018cd005a not found: ID does not exist" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.554221 4637 scope.go:117] "RemoveContainer" containerID="ea66906f740f05d232e3969f5a3f2e8f2ca61be689ee3d1c2d22bc42dcb03a72" Dec 01 15:04:54 crc kubenswrapper[4637]: E1201 15:04:54.555570 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea66906f740f05d232e3969f5a3f2e8f2ca61be689ee3d1c2d22bc42dcb03a72\": container with ID starting with ea66906f740f05d232e3969f5a3f2e8f2ca61be689ee3d1c2d22bc42dcb03a72 not found: ID does not exist" containerID="ea66906f740f05d232e3969f5a3f2e8f2ca61be689ee3d1c2d22bc42dcb03a72" Dec 01 15:04:54 crc 
kubenswrapper[4637]: I1201 15:04:54.555601 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea66906f740f05d232e3969f5a3f2e8f2ca61be689ee3d1c2d22bc42dcb03a72"} err="failed to get container status \"ea66906f740f05d232e3969f5a3f2e8f2ca61be689ee3d1c2d22bc42dcb03a72\": rpc error: code = NotFound desc = could not find container \"ea66906f740f05d232e3969f5a3f2e8f2ca61be689ee3d1c2d22bc42dcb03a72\": container with ID starting with ea66906f740f05d232e3969f5a3f2e8f2ca61be689ee3d1c2d22bc42dcb03a72 not found: ID does not exist" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.560343 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.582415 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:04:54 crc kubenswrapper[4637]: E1201 15:04:54.583019 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="604935ee-aa8a-461e-9bd9-f11ad29128e0" containerName="ceilometer-central-agent" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.583044 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="604935ee-aa8a-461e-9bd9-f11ad29128e0" containerName="ceilometer-central-agent" Dec 01 15:04:54 crc kubenswrapper[4637]: E1201 15:04:54.583093 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="604935ee-aa8a-461e-9bd9-f11ad29128e0" containerName="sg-core" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.583100 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="604935ee-aa8a-461e-9bd9-f11ad29128e0" containerName="sg-core" Dec 01 15:04:54 crc kubenswrapper[4637]: E1201 15:04:54.583117 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="604935ee-aa8a-461e-9bd9-f11ad29128e0" containerName="proxy-httpd" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.583124 4637 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="604935ee-aa8a-461e-9bd9-f11ad29128e0" containerName="proxy-httpd" Dec 01 15:04:54 crc kubenswrapper[4637]: E1201 15:04:54.583138 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="604935ee-aa8a-461e-9bd9-f11ad29128e0" containerName="ceilometer-notification-agent" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.583148 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="604935ee-aa8a-461e-9bd9-f11ad29128e0" containerName="ceilometer-notification-agent" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.583334 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="604935ee-aa8a-461e-9bd9-f11ad29128e0" containerName="sg-core" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.583377 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="604935ee-aa8a-461e-9bd9-f11ad29128e0" containerName="ceilometer-central-agent" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.583393 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="604935ee-aa8a-461e-9bd9-f11ad29128e0" containerName="ceilometer-notification-agent" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.583406 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="604935ee-aa8a-461e-9bd9-f11ad29128e0" containerName="proxy-httpd" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.585357 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.589663 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.589793 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.599338 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.664897 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\") " pod="openstack/ceilometer-0" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.665014 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\") " pod="openstack/ceilometer-0" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.665103 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-scripts\") pod \"ceilometer-0\" (UID: \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\") " pod="openstack/ceilometer-0" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.665158 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm9xb\" (UniqueName: \"kubernetes.io/projected/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-kube-api-access-sm9xb\") pod \"ceilometer-0\" (UID: 
\"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\") " pod="openstack/ceilometer-0" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.665196 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-config-data\") pod \"ceilometer-0\" (UID: \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\") " pod="openstack/ceilometer-0" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.665226 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-run-httpd\") pod \"ceilometer-0\" (UID: \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\") " pod="openstack/ceilometer-0" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.665269 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-log-httpd\") pod \"ceilometer-0\" (UID: \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\") " pod="openstack/ceilometer-0" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.766726 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-scripts\") pod \"ceilometer-0\" (UID: \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\") " pod="openstack/ceilometer-0" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.766830 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm9xb\" (UniqueName: \"kubernetes.io/projected/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-kube-api-access-sm9xb\") pod \"ceilometer-0\" (UID: \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\") " pod="openstack/ceilometer-0" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.766877 4637 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-config-data\") pod \"ceilometer-0\" (UID: \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\") " pod="openstack/ceilometer-0" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.766902 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-run-httpd\") pod \"ceilometer-0\" (UID: \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\") " pod="openstack/ceilometer-0" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.766966 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-log-httpd\") pod \"ceilometer-0\" (UID: \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\") " pod="openstack/ceilometer-0" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.767022 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\") " pod="openstack/ceilometer-0" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.767094 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\") " pod="openstack/ceilometer-0" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.768533 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-run-httpd\") pod \"ceilometer-0\" (UID: \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\") " 
pod="openstack/ceilometer-0" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.769478 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-log-httpd\") pod \"ceilometer-0\" (UID: \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\") " pod="openstack/ceilometer-0" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.771071 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\") " pod="openstack/ceilometer-0" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.771764 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-config-data\") pod \"ceilometer-0\" (UID: \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\") " pod="openstack/ceilometer-0" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.771789 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\") " pod="openstack/ceilometer-0" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.772316 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-scripts\") pod \"ceilometer-0\" (UID: \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\") " pod="openstack/ceilometer-0" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.799515 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm9xb\" (UniqueName: 
\"kubernetes.io/projected/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-kube-api-access-sm9xb\") pod \"ceilometer-0\" (UID: \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\") " pod="openstack/ceilometer-0" Dec 01 15:04:54 crc kubenswrapper[4637]: I1201 15:04:54.925868 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:04:55 crc kubenswrapper[4637]: I1201 15:04:55.210274 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6fcc69568b-hmqt6" event={"ID":"4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04","Type":"ContainerStarted","Data":"48db4318e9c39d083aaf7cbb67ca7e48c627f99ad2a05577e4f111ddc5375213"} Dec 01 15:04:55 crc kubenswrapper[4637]: I1201 15:04:55.215903 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-84f489b6b7-wswv6" event={"ID":"f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da","Type":"ContainerStarted","Data":"e2baddf7c4a54e658f786f6ea70943a4c7fe600db40aaaa90f0e26615581a087"} Dec 01 15:04:55 crc kubenswrapper[4637]: I1201 15:04:55.215967 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-84f489b6b7-wswv6" event={"ID":"f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da","Type":"ContainerStarted","Data":"9f46fecbd6f62a4f46480fed2b95d60198419797c48ab4af0b201abb1df7e0cb"} Dec 01 15:04:55 crc kubenswrapper[4637]: I1201 15:04:55.215981 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-84f489b6b7-wswv6" event={"ID":"f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da","Type":"ContainerStarted","Data":"ae7b2cf4bfecc3c212212e10ec8d0bb11b7b2789f3f56db7528bdb7a460d4d14"} Dec 01 15:04:55 crc kubenswrapper[4637]: I1201 15:04:55.216113 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:04:55 crc kubenswrapper[4637]: I1201 15:04:55.219623 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d5b77c96f-nz2mk" 
event={"ID":"395c12b6-6b37-4ed6-93fb-65937fa99e65","Type":"ContainerStarted","Data":"a3962d7efaeaef2d9aa855c2c9945140404e3bd686a5753e8dc6cc12055c88c7"} Dec 01 15:04:55 crc kubenswrapper[4637]: I1201 15:04:55.270909 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6fcc69568b-hmqt6" podStartSLOduration=13.15453847 podStartE2EDuration="16.270884479s" podCreationTimestamp="2025-12-01 15:04:39 +0000 UTC" firstStartedPulling="2025-12-01 15:04:50.300077876 +0000 UTC m=+1140.817786704" lastFinishedPulling="2025-12-01 15:04:53.416423885 +0000 UTC m=+1143.934132713" observedRunningTime="2025-12-01 15:04:55.255548903 +0000 UTC m=+1145.773257731" watchObservedRunningTime="2025-12-01 15:04:55.270884479 +0000 UTC m=+1145.788593307" Dec 01 15:04:55 crc kubenswrapper[4637]: I1201 15:04:55.305076 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7d5b77c96f-nz2mk" podStartSLOduration=13.148420325 podStartE2EDuration="16.305057526s" podCreationTimestamp="2025-12-01 15:04:39 +0000 UTC" firstStartedPulling="2025-12-01 15:04:50.267679528 +0000 UTC m=+1140.785388356" lastFinishedPulling="2025-12-01 15:04:53.424316729 +0000 UTC m=+1143.942025557" observedRunningTime="2025-12-01 15:04:55.29082482 +0000 UTC m=+1145.808533648" watchObservedRunningTime="2025-12-01 15:04:55.305057526 +0000 UTC m=+1145.822766344" Dec 01 15:04:55 crc kubenswrapper[4637]: I1201 15:04:55.321189 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-84f489b6b7-wswv6" podStartSLOduration=3.321167742 podStartE2EDuration="3.321167742s" podCreationTimestamp="2025-12-01 15:04:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:04:55.311647083 +0000 UTC m=+1145.829355911" watchObservedRunningTime="2025-12-01 15:04:55.321167742 +0000 UTC m=+1145.838876570" Dec 
01 15:04:55 crc kubenswrapper[4637]: I1201 15:04:55.612914 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:04:55 crc kubenswrapper[4637]: W1201 15:04:55.647430 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod723c6c14_68e1_4e4e_9858_8fdfb95cfdb2.slice/crio-71edfc29cd9391565cff4cf9409ab2687647839ff50137fcb0a859d04c370ca9 WatchSource:0}: Error finding container 71edfc29cd9391565cff4cf9409ab2687647839ff50137fcb0a859d04c370ca9: Status 404 returned error can't find the container with id 71edfc29cd9391565cff4cf9409ab2687647839ff50137fcb0a859d04c370ca9 Dec 01 15:04:55 crc kubenswrapper[4637]: I1201 15:04:55.786089 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="604935ee-aa8a-461e-9bd9-f11ad29128e0" path="/var/lib/kubelet/pods/604935ee-aa8a-461e-9bd9-f11ad29128e0/volumes" Dec 01 15:04:56 crc kubenswrapper[4637]: I1201 15:04:56.244830 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2","Type":"ContainerStarted","Data":"71edfc29cd9391565cff4cf9409ab2687647839ff50137fcb0a859d04c370ca9"} Dec 01 15:04:56 crc kubenswrapper[4637]: I1201 15:04:56.245771 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:04:56 crc kubenswrapper[4637]: I1201 15:04:56.496115 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-64585bdddb-h9hvw" podUID="29e960b7-8574-4c38-bb22-67f5a77aaca6" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Dec 01 15:04:56 crc kubenswrapper[4637]: I1201 15:04:56.496365 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:04:57 crc 
kubenswrapper[4637]: I1201 15:04:57.283336 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2","Type":"ContainerStarted","Data":"e08865f7ec1c259f66a2a6cac5169f634dbdd0e30a6aebda05222ae6ecae2590"} Dec 01 15:04:58 crc kubenswrapper[4637]: I1201 15:04:58.666257 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2","Type":"ContainerStarted","Data":"6b491ff33ed230f12f09b7b5b0f7b0025127144bc4c49bedfcf0ef506ea4f500"} Dec 01 15:04:59 crc kubenswrapper[4637]: I1201 15:04:59.794427 4637 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/neutron-76ccd9588d-b65nb" Dec 01 15:04:59 crc kubenswrapper[4637]: I1201 15:04:59.810210 4637 scope.go:117] "RemoveContainer" containerID="aa42fe3047e8d39dbb4caa91b253485b3d79bfaa9d48cdfe41701fcac8440e92" Dec 01 15:04:59 crc kubenswrapper[4637]: I1201 15:04:59.810540 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-76ccd9588d-b65nb" podUID="4162adc0-1edf-4059-a924-ef743026eda4" containerName="neutron-api" probeResult="failure" output="Get \"http://10.217.0.155:9696/\": dial tcp 10.217.0.155:9696: connect: connection refused" Dec 01 15:04:59 crc kubenswrapper[4637]: E1201 15:04:59.834184 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd pod=neutron-76ccd9588d-b65nb_openstack(4162adc0-1edf-4059-a924-ef743026eda4)\"" pod="openstack/neutron-76ccd9588d-b65nb" podUID="4162adc0-1edf-4059-a924-ef743026eda4" Dec 01 15:04:59 crc kubenswrapper[4637]: I1201 15:04:59.898194 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7756bcbc94-flscs" Dec 01 15:05:00 crc kubenswrapper[4637]: I1201 15:05:00.163759 4637 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7756bcbc94-flscs" Dec 01 15:05:00 crc kubenswrapper[4637]: I1201 15:05:00.422126 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" Dec 01 15:05:00 crc kubenswrapper[4637]: I1201 15:05:00.536109 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-tzb4m"] Dec 01 15:05:00 crc kubenswrapper[4637]: I1201 15:05:00.536398 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" podUID="2fbd0319-a7c4-4fa8-928c-50ba5dab6777" containerName="dnsmasq-dns" containerID="cri-o://0f52dcffa9a8750f7ef665b85d1d446f4bf30124683f5e0129448c3f6fb720c4" gracePeriod=10 Dec 01 15:05:00 crc kubenswrapper[4637]: I1201 15:05:00.740874 4637 generic.go:334] "Generic (PLEG): container finished" podID="2fbd0319-a7c4-4fa8-928c-50ba5dab6777" containerID="0f52dcffa9a8750f7ef665b85d1d446f4bf30124683f5e0129448c3f6fb720c4" exitCode=0 Dec 01 15:05:00 crc kubenswrapper[4637]: I1201 15:05:00.741262 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" event={"ID":"2fbd0319-a7c4-4fa8-928c-50ba5dab6777","Type":"ContainerDied","Data":"0f52dcffa9a8750f7ef665b85d1d446f4bf30124683f5e0129448c3f6fb720c4"} Dec 01 15:05:00 crc kubenswrapper[4637]: I1201 15:05:00.744133 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2","Type":"ContainerStarted","Data":"fd97911c76bfc3ddbe65dfee2be70cc81f03cb656db16ea141a99959c086c9d4"} Dec 01 15:05:00 crc kubenswrapper[4637]: I1201 15:05:00.919358 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7c95959774-tk5fr" podUID="57082a3e-c5e1-4926-a5b1-306d0becae0c" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.161:9311/healthcheck\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 15:05:01 crc kubenswrapper[4637]: I1201 15:05:01.434061 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" Dec 01 15:05:01 crc kubenswrapper[4637]: I1201 15:05:01.484269 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ggdv\" (UniqueName: \"kubernetes.io/projected/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-kube-api-access-4ggdv\") pod \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\" (UID: \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\") " Dec 01 15:05:01 crc kubenswrapper[4637]: I1201 15:05:01.484351 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-ovsdbserver-sb\") pod \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\" (UID: \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\") " Dec 01 15:05:01 crc kubenswrapper[4637]: I1201 15:05:01.484381 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-ovsdbserver-nb\") pod \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\" (UID: \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\") " Dec 01 15:05:01 crc kubenswrapper[4637]: I1201 15:05:01.484409 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-dns-swift-storage-0\") pod \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\" (UID: \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\") " Dec 01 15:05:01 crc kubenswrapper[4637]: I1201 15:05:01.484452 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-config\") pod \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\" (UID: 
\"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\") " Dec 01 15:05:01 crc kubenswrapper[4637]: I1201 15:05:01.484594 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-dns-svc\") pod \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\" (UID: \"2fbd0319-a7c4-4fa8-928c-50ba5dab6777\") " Dec 01 15:05:01 crc kubenswrapper[4637]: I1201 15:05:01.519180 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-kube-api-access-4ggdv" (OuterVolumeSpecName: "kube-api-access-4ggdv") pod "2fbd0319-a7c4-4fa8-928c-50ba5dab6777" (UID: "2fbd0319-a7c4-4fa8-928c-50ba5dab6777"). InnerVolumeSpecName "kube-api-access-4ggdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:05:01 crc kubenswrapper[4637]: I1201 15:05:01.593810 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ggdv\" (UniqueName: \"kubernetes.io/projected/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-kube-api-access-4ggdv\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:01 crc kubenswrapper[4637]: I1201 15:05:01.702276 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2fbd0319-a7c4-4fa8-928c-50ba5dab6777" (UID: "2fbd0319-a7c4-4fa8-928c-50ba5dab6777"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:05:01 crc kubenswrapper[4637]: I1201 15:05:01.721169 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2fbd0319-a7c4-4fa8-928c-50ba5dab6777" (UID: "2fbd0319-a7c4-4fa8-928c-50ba5dab6777"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:05:01 crc kubenswrapper[4637]: I1201 15:05:01.727823 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2fbd0319-a7c4-4fa8-928c-50ba5dab6777" (UID: "2fbd0319-a7c4-4fa8-928c-50ba5dab6777"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:05:01 crc kubenswrapper[4637]: I1201 15:05:01.750870 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2fbd0319-a7c4-4fa8-928c-50ba5dab6777" (UID: "2fbd0319-a7c4-4fa8-928c-50ba5dab6777"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:05:01 crc kubenswrapper[4637]: I1201 15:05:01.779313 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" Dec 01 15:05:01 crc kubenswrapper[4637]: I1201 15:05:01.789359 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-config" (OuterVolumeSpecName: "config") pod "2fbd0319-a7c4-4fa8-928c-50ba5dab6777" (UID: "2fbd0319-a7c4-4fa8-928c-50ba5dab6777"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:05:01 crc kubenswrapper[4637]: I1201 15:05:01.793990 4637 generic.go:334] "Generic (PLEG): container finished" podID="75ffcaf0-9c6e-4f8e-98a0-b9c44529527d" containerID="7f132f75ca313511f7e2de9532e9d8538b2c463379d270be61033ae1bdc00b40" exitCode=0 Dec 01 15:05:01 crc kubenswrapper[4637]: I1201 15:05:01.800783 4637 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:01 crc kubenswrapper[4637]: I1201 15:05:01.800810 4637 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:01 crc kubenswrapper[4637]: I1201 15:05:01.800820 4637 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:01 crc kubenswrapper[4637]: I1201 15:05:01.800830 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-config\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:01 crc kubenswrapper[4637]: I1201 15:05:01.800841 4637 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fbd0319-a7c4-4fa8-928c-50ba5dab6777-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:01 crc kubenswrapper[4637]: I1201 15:05:01.809374 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-tzb4m" event={"ID":"2fbd0319-a7c4-4fa8-928c-50ba5dab6777","Type":"ContainerDied","Data":"17cad55f97b288f02de16bf189c665ecb42c07aeb2e5778577cdca723e150861"} Dec 01 15:05:01 crc kubenswrapper[4637]: I1201 
15:05:01.809602 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vlwgj" event={"ID":"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d","Type":"ContainerDied","Data":"7f132f75ca313511f7e2de9532e9d8538b2c463379d270be61033ae1bdc00b40"} Dec 01 15:05:01 crc kubenswrapper[4637]: I1201 15:05:01.812092 4637 scope.go:117] "RemoveContainer" containerID="0f52dcffa9a8750f7ef665b85d1d446f4bf30124683f5e0129448c3f6fb720c4" Dec 01 15:05:01 crc kubenswrapper[4637]: I1201 15:05:01.901165 4637 scope.go:117] "RemoveContainer" containerID="2fd5ab09855d1bf5d9a25bbbddf8692eee529e45aed52c929677508196da3b8d" Dec 01 15:05:02 crc kubenswrapper[4637]: I1201 15:05:02.107725 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-tzb4m"] Dec 01 15:05:02 crc kubenswrapper[4637]: I1201 15:05:02.152867 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-tzb4m"] Dec 01 15:05:02 crc kubenswrapper[4637]: I1201 15:05:02.439130 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-ff56c879c-9gwf6" Dec 01 15:05:02 crc kubenswrapper[4637]: I1201 15:05:02.522360 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-76ccd9588d-b65nb"] Dec 01 15:05:02 crc kubenswrapper[4637]: I1201 15:05:02.529394 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-76ccd9588d-b65nb" podUID="4162adc0-1edf-4059-a924-ef743026eda4" containerName="neutron-api" containerID="cri-o://08f5f48d30b8b32e703127f1a7e1645a00d1778edeb91de396ceebe17595ff4f" gracePeriod=30 Dec 01 15:05:03 crc kubenswrapper[4637]: I1201 15:05:03.363795 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-84f489b6b7-wswv6" Dec 01 15:05:03 crc kubenswrapper[4637]: I1201 15:05:03.446760 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-84f489b6b7-wswv6" 
Dec 01 15:05:03 crc kubenswrapper[4637]: I1201 15:05:03.786161 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fbd0319-a7c4-4fa8-928c-50ba5dab6777" path="/var/lib/kubelet/pods/2fbd0319-a7c4-4fa8-928c-50ba5dab6777/volumes" Dec 01 15:05:03 crc kubenswrapper[4637]: I1201 15:05:03.847996 4637 generic.go:334] "Generic (PLEG): container finished" podID="29e960b7-8574-4c38-bb22-67f5a77aaca6" containerID="dc893b69f071bb1a0f4f2bb759966b8e137cd8b71a954090eda6934ed381e989" exitCode=137 Dec 01 15:05:03 crc kubenswrapper[4637]: I1201 15:05:03.848910 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64585bdddb-h9hvw" event={"ID":"29e960b7-8574-4c38-bb22-67f5a77aaca6","Type":"ContainerDied","Data":"dc893b69f071bb1a0f4f2bb759966b8e137cd8b71a954090eda6934ed381e989"} Dec 01 15:05:03 crc kubenswrapper[4637]: I1201 15:05:03.920189 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7c95959774-tk5fr" podUID="57082a3e-c5e1-4926-a5b1-306d0becae0c" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.161:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 15:05:03 crc kubenswrapper[4637]: I1201 15:05:03.920279 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7c95959774-tk5fr" podUID="57082a3e-c5e1-4926-a5b1-306d0becae0c" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.161:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 15:05:04 crc kubenswrapper[4637]: I1201 15:05:04.549298 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7756bcbc94-flscs" podUID="bdf391ef-d734-4a25-9726-c7254f4abc1a" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Dec 01 15:05:05 crc kubenswrapper[4637]: I1201 15:05:05.025113 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7c95959774-tk5fr" podUID="57082a3e-c5e1-4926-a5b1-306d0becae0c" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.161:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 15:05:05 crc kubenswrapper[4637]: I1201 15:05:05.467215 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7756bcbc94-flscs" podUID="bdf391ef-d734-4a25-9726-c7254f4abc1a" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 15:05:05 crc kubenswrapper[4637]: I1201 15:05:05.576327 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:05:05 crc kubenswrapper[4637]: I1201 15:05:05.924095 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7c95959774-tk5fr" podUID="57082a3e-c5e1-4926-a5b1-306d0becae0c" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.161:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 15:05:06 crc kubenswrapper[4637]: I1201 15:05:06.500607 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-64585bdddb-h9hvw" podUID="29e960b7-8574-4c38-bb22-67f5a77aaca6" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Dec 01 15:05:06 crc kubenswrapper[4637]: I1201 15:05:06.671681 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7c95959774-tk5fr" Dec 01 15:05:08 crc kubenswrapper[4637]: I1201 15:05:08.592819 4637 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7c95959774-tk5fr" Dec 01 15:05:08 crc kubenswrapper[4637]: I1201 15:05:08.671484 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7756bcbc94-flscs"] Dec 01 15:05:08 crc kubenswrapper[4637]: I1201 15:05:08.671859 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7756bcbc94-flscs" podUID="bdf391ef-d734-4a25-9726-c7254f4abc1a" containerName="barbican-api-log" containerID="cri-o://dce303b4b49cde975dfdcc16ee67330e1a7299498e48ba58dfd56bbac51376b3" gracePeriod=30 Dec 01 15:05:08 crc kubenswrapper[4637]: I1201 15:05:08.672398 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7756bcbc94-flscs" podUID="bdf391ef-d734-4a25-9726-c7254f4abc1a" containerName="barbican-api" containerID="cri-o://fd402de5b7c0948863d9dcaf6ee7a38b36d87c1727ef90984d357a19fab97b43" gracePeriod=30 Dec 01 15:05:08 crc kubenswrapper[4637]: I1201 15:05:08.936573 4637 generic.go:334] "Generic (PLEG): container finished" podID="bdf391ef-d734-4a25-9726-c7254f4abc1a" containerID="dce303b4b49cde975dfdcc16ee67330e1a7299498e48ba58dfd56bbac51376b3" exitCode=143 Dec 01 15:05:08 crc kubenswrapper[4637]: I1201 15:05:08.936632 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7756bcbc94-flscs" event={"ID":"bdf391ef-d734-4a25-9726-c7254f4abc1a","Type":"ContainerDied","Data":"dce303b4b49cde975dfdcc16ee67330e1a7299498e48ba58dfd56bbac51376b3"} Dec 01 15:05:10 crc kubenswrapper[4637]: E1201 15:05:10.635631 4637 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4162adc0_1edf_4059_a924_ef743026eda4.slice/crio-08f5f48d30b8b32e703127f1a7e1645a00d1778edeb91de396ceebe17595ff4f.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4162adc0_1edf_4059_a924_ef743026eda4.slice/crio-conmon-08f5f48d30b8b32e703127f1a7e1645a00d1778edeb91de396ceebe17595ff4f.scope\": RecentStats: unable to find data in memory cache]" Dec 01 15:05:10 crc kubenswrapper[4637]: I1201 15:05:10.968174 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-76ccd9588d-b65nb_4162adc0-1edf-4059-a924-ef743026eda4/neutron-httpd/2.log" Dec 01 15:05:10 crc kubenswrapper[4637]: I1201 15:05:10.969590 4637 generic.go:334] "Generic (PLEG): container finished" podID="4162adc0-1edf-4059-a924-ef743026eda4" containerID="08f5f48d30b8b32e703127f1a7e1645a00d1778edeb91de396ceebe17595ff4f" exitCode=0 Dec 01 15:05:10 crc kubenswrapper[4637]: I1201 15:05:10.969628 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76ccd9588d-b65nb" event={"ID":"4162adc0-1edf-4059-a924-ef743026eda4","Type":"ContainerDied","Data":"08f5f48d30b8b32e703127f1a7e1645a00d1778edeb91de396ceebe17595ff4f"} Dec 01 15:05:11 crc kubenswrapper[4637]: I1201 15:05:11.945376 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-g96w2"] Dec 01 15:05:11 crc kubenswrapper[4637]: E1201 15:05:11.945714 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fbd0319-a7c4-4fa8-928c-50ba5dab6777" containerName="init" Dec 01 15:05:11 crc kubenswrapper[4637]: I1201 15:05:11.945727 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fbd0319-a7c4-4fa8-928c-50ba5dab6777" containerName="init" Dec 01 15:05:11 crc kubenswrapper[4637]: E1201 15:05:11.945740 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fbd0319-a7c4-4fa8-928c-50ba5dab6777" containerName="dnsmasq-dns" Dec 01 15:05:11 crc kubenswrapper[4637]: I1201 15:05:11.945747 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fbd0319-a7c4-4fa8-928c-50ba5dab6777" containerName="dnsmasq-dns" Dec 01 15:05:11 crc kubenswrapper[4637]: I1201 
15:05:11.945979 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fbd0319-a7c4-4fa8-928c-50ba5dab6777" containerName="dnsmasq-dns" Dec 01 15:05:11 crc kubenswrapper[4637]: I1201 15:05:11.946536 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-g96w2" Dec 01 15:05:11 crc kubenswrapper[4637]: I1201 15:05:11.977739 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j249\" (UniqueName: \"kubernetes.io/projected/7e6ab34a-72f6-4908-a148-e08b308afb1a-kube-api-access-7j249\") pod \"nova-api-db-create-g96w2\" (UID: \"7e6ab34a-72f6-4908-a148-e08b308afb1a\") " pod="openstack/nova-api-db-create-g96w2" Dec 01 15:05:12 crc kubenswrapper[4637]: I1201 15:05:12.019865 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-g96w2"] Dec 01 15:05:12 crc kubenswrapper[4637]: I1201 15:05:12.080840 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j249\" (UniqueName: \"kubernetes.io/projected/7e6ab34a-72f6-4908-a148-e08b308afb1a-kube-api-access-7j249\") pod \"nova-api-db-create-g96w2\" (UID: \"7e6ab34a-72f6-4908-a148-e08b308afb1a\") " pod="openstack/nova-api-db-create-g96w2" Dec 01 15:05:12 crc kubenswrapper[4637]: I1201 15:05:12.137129 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j249\" (UniqueName: \"kubernetes.io/projected/7e6ab34a-72f6-4908-a148-e08b308afb1a-kube-api-access-7j249\") pod \"nova-api-db-create-g96w2\" (UID: \"7e6ab34a-72f6-4908-a148-e08b308afb1a\") " pod="openstack/nova-api-db-create-g96w2" Dec 01 15:05:12 crc kubenswrapper[4637]: I1201 15:05:12.145460 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-6k8ct"] Dec 01 15:05:12 crc kubenswrapper[4637]: I1201 15:05:12.153493 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6k8ct" Dec 01 15:05:12 crc kubenswrapper[4637]: I1201 15:05:12.184898 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9lmn\" (UniqueName: \"kubernetes.io/projected/44eb4d02-ab26-4592-9b34-698fe46a8c51-kube-api-access-g9lmn\") pod \"nova-cell0-db-create-6k8ct\" (UID: \"44eb4d02-ab26-4592-9b34-698fe46a8c51\") " pod="openstack/nova-cell0-db-create-6k8ct" Dec 01 15:05:12 crc kubenswrapper[4637]: I1201 15:05:12.189437 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6k8ct"] Dec 01 15:05:12 crc kubenswrapper[4637]: I1201 15:05:12.191035 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7756bcbc94-flscs" podUID="bdf391ef-d734-4a25-9726-c7254f4abc1a" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:59686->10.217.0.160:9311: read: connection reset by peer" Dec 01 15:05:12 crc kubenswrapper[4637]: I1201 15:05:12.191460 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7756bcbc94-flscs" podUID="bdf391ef-d734-4a25-9726-c7254f4abc1a" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:59670->10.217.0.160:9311: read: connection reset by peer" Dec 01 15:05:12 crc kubenswrapper[4637]: I1201 15:05:12.246117 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-25b5n"] Dec 01 15:05:12 crc kubenswrapper[4637]: I1201 15:05:12.254709 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-25b5n" Dec 01 15:05:12 crc kubenswrapper[4637]: I1201 15:05:12.269666 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-g96w2" Dec 01 15:05:12 crc kubenswrapper[4637]: I1201 15:05:12.287990 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pj7h\" (UniqueName: \"kubernetes.io/projected/1ceb774a-fe03-4b01-a371-d0a8b10e6b3d-kube-api-access-6pj7h\") pod \"nova-cell1-db-create-25b5n\" (UID: \"1ceb774a-fe03-4b01-a371-d0a8b10e6b3d\") " pod="openstack/nova-cell1-db-create-25b5n" Dec 01 15:05:12 crc kubenswrapper[4637]: I1201 15:05:12.288054 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9lmn\" (UniqueName: \"kubernetes.io/projected/44eb4d02-ab26-4592-9b34-698fe46a8c51-kube-api-access-g9lmn\") pod \"nova-cell0-db-create-6k8ct\" (UID: \"44eb4d02-ab26-4592-9b34-698fe46a8c51\") " pod="openstack/nova-cell0-db-create-6k8ct" Dec 01 15:05:12 crc kubenswrapper[4637]: I1201 15:05:12.313056 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-25b5n"] Dec 01 15:05:12 crc kubenswrapper[4637]: I1201 15:05:12.363772 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9lmn\" (UniqueName: \"kubernetes.io/projected/44eb4d02-ab26-4592-9b34-698fe46a8c51-kube-api-access-g9lmn\") pod \"nova-cell0-db-create-6k8ct\" (UID: \"44eb4d02-ab26-4592-9b34-698fe46a8c51\") " pod="openstack/nova-cell0-db-create-6k8ct" Dec 01 15:05:12 crc kubenswrapper[4637]: I1201 15:05:12.390948 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pj7h\" (UniqueName: \"kubernetes.io/projected/1ceb774a-fe03-4b01-a371-d0a8b10e6b3d-kube-api-access-6pj7h\") pod \"nova-cell1-db-create-25b5n\" (UID: \"1ceb774a-fe03-4b01-a371-d0a8b10e6b3d\") " pod="openstack/nova-cell1-db-create-25b5n" Dec 01 15:05:12 crc kubenswrapper[4637]: I1201 15:05:12.428718 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pj7h\" 
(UniqueName: \"kubernetes.io/projected/1ceb774a-fe03-4b01-a371-d0a8b10e6b3d-kube-api-access-6pj7h\") pod \"nova-cell1-db-create-25b5n\" (UID: \"1ceb774a-fe03-4b01-a371-d0a8b10e6b3d\") " pod="openstack/nova-cell1-db-create-25b5n" Dec 01 15:05:12 crc kubenswrapper[4637]: I1201 15:05:12.547311 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6k8ct" Dec 01 15:05:12 crc kubenswrapper[4637]: I1201 15:05:12.581307 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-25b5n" Dec 01 15:05:13 crc kubenswrapper[4637]: I1201 15:05:13.001006 4637 generic.go:334] "Generic (PLEG): container finished" podID="bdf391ef-d734-4a25-9726-c7254f4abc1a" containerID="fd402de5b7c0948863d9dcaf6ee7a38b36d87c1727ef90984d357a19fab97b43" exitCode=0 Dec 01 15:05:13 crc kubenswrapper[4637]: I1201 15:05:13.001046 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7756bcbc94-flscs" event={"ID":"bdf391ef-d734-4a25-9726-c7254f4abc1a","Type":"ContainerDied","Data":"fd402de5b7c0948863d9dcaf6ee7a38b36d87c1727ef90984d357a19fab97b43"} Dec 01 15:05:13 crc kubenswrapper[4637]: E1201 15:05:13.762074 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Dec 01 15:05:13 crc kubenswrapper[4637]: E1201 15:05:13.762617 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8fh686h659h68bh5c5h585h598h8h66dh58fh588hb9h59h57h698h57fh574h8fh5c9h56dh58bh574hf7hc6h5f8h689hdfh5d9h58hf5h548h57cq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8jwgc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(421d907a-c7b0-4109-8d01-e725459215b9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 15:05:13 crc kubenswrapper[4637]: E1201 15:05:13.763826 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="421d907a-c7b0-4109-8d01-e725459215b9" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.033056 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vlwgj" event={"ID":"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d","Type":"ContainerDied","Data":"f45e5426049548ad70c3c43ba2139988f04b7cb01db487822d5e69361ea8273c"} Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.033314 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f45e5426049548ad70c3c43ba2139988f04b7cb01db487822d5e69361ea8273c" Dec 01 15:05:14 crc kubenswrapper[4637]: E1201 15:05:14.034780 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="421d907a-c7b0-4109-8d01-e725459215b9" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.103627 4637 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/cinder-db-sync-vlwgj" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.139401 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-db-sync-config-data\") pod \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\" (UID: \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\") " Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.139805 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc85c\" (UniqueName: \"kubernetes.io/projected/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-kube-api-access-tc85c\") pod \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\" (UID: \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\") " Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.139911 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-config-data\") pod \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\" (UID: \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\") " Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.139976 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-combined-ca-bundle\") pod \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\" (UID: \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\") " Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.140009 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-scripts\") pod \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\" (UID: \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\") " Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.140049 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-etc-machine-id\") pod \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\" (UID: \"75ffcaf0-9c6e-4f8e-98a0-b9c44529527d\") " Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.145047 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "75ffcaf0-9c6e-4f8e-98a0-b9c44529527d" (UID: "75ffcaf0-9c6e-4f8e-98a0-b9c44529527d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.158356 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-scripts" (OuterVolumeSpecName: "scripts") pod "75ffcaf0-9c6e-4f8e-98a0-b9c44529527d" (UID: "75ffcaf0-9c6e-4f8e-98a0-b9c44529527d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.165019 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-kube-api-access-tc85c" (OuterVolumeSpecName: "kube-api-access-tc85c") pod "75ffcaf0-9c6e-4f8e-98a0-b9c44529527d" (UID: "75ffcaf0-9c6e-4f8e-98a0-b9c44529527d"). InnerVolumeSpecName "kube-api-access-tc85c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.165436 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "75ffcaf0-9c6e-4f8e-98a0-b9c44529527d" (UID: "75ffcaf0-9c6e-4f8e-98a0-b9c44529527d"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.228475 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75ffcaf0-9c6e-4f8e-98a0-b9c44529527d" (UID: "75ffcaf0-9c6e-4f8e-98a0-b9c44529527d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.241834 4637 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.241874 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tc85c\" (UniqueName: \"kubernetes.io/projected/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-kube-api-access-tc85c\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.241886 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.241895 4637 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.241905 4637 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.290057 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-config-data" (OuterVolumeSpecName: "config-data") pod "75ffcaf0-9c6e-4f8e-98a0-b9c44529527d" (UID: "75ffcaf0-9c6e-4f8e-98a0-b9c44529527d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.344807 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.347074 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.449358 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29e960b7-8574-4c38-bb22-67f5a77aaca6-scripts\") pod \"29e960b7-8574-4c38-bb22-67f5a77aaca6\" (UID: \"29e960b7-8574-4c38-bb22-67f5a77aaca6\") " Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.449751 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29e960b7-8574-4c38-bb22-67f5a77aaca6-horizon-secret-key\") pod \"29e960b7-8574-4c38-bb22-67f5a77aaca6\" (UID: \"29e960b7-8574-4c38-bb22-67f5a77aaca6\") " Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.449784 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/29e960b7-8574-4c38-bb22-67f5a77aaca6-horizon-tls-certs\") pod \"29e960b7-8574-4c38-bb22-67f5a77aaca6\" (UID: \"29e960b7-8574-4c38-bb22-67f5a77aaca6\") " Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.449873 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/29e960b7-8574-4c38-bb22-67f5a77aaca6-config-data\") pod \"29e960b7-8574-4c38-bb22-67f5a77aaca6\" (UID: \"29e960b7-8574-4c38-bb22-67f5a77aaca6\") " Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.449970 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e960b7-8574-4c38-bb22-67f5a77aaca6-combined-ca-bundle\") pod \"29e960b7-8574-4c38-bb22-67f5a77aaca6\" (UID: \"29e960b7-8574-4c38-bb22-67f5a77aaca6\") " Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.449997 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkkpl\" (UniqueName: \"kubernetes.io/projected/29e960b7-8574-4c38-bb22-67f5a77aaca6-kube-api-access-qkkpl\") pod \"29e960b7-8574-4c38-bb22-67f5a77aaca6\" (UID: \"29e960b7-8574-4c38-bb22-67f5a77aaca6\") " Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.452215 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29e960b7-8574-4c38-bb22-67f5a77aaca6-logs\") pod \"29e960b7-8574-4c38-bb22-67f5a77aaca6\" (UID: \"29e960b7-8574-4c38-bb22-67f5a77aaca6\") " Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.454825 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29e960b7-8574-4c38-bb22-67f5a77aaca6-logs" (OuterVolumeSpecName: "logs") pod "29e960b7-8574-4c38-bb22-67f5a77aaca6" (UID: "29e960b7-8574-4c38-bb22-67f5a77aaca6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.480659 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29e960b7-8574-4c38-bb22-67f5a77aaca6-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "29e960b7-8574-4c38-bb22-67f5a77aaca6" (UID: "29e960b7-8574-4c38-bb22-67f5a77aaca6"). 
InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.508435 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29e960b7-8574-4c38-bb22-67f5a77aaca6-kube-api-access-qkkpl" (OuterVolumeSpecName: "kube-api-access-qkkpl") pod "29e960b7-8574-4c38-bb22-67f5a77aaca6" (UID: "29e960b7-8574-4c38-bb22-67f5a77aaca6"). InnerVolumeSpecName "kube-api-access-qkkpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.522860 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29e960b7-8574-4c38-bb22-67f5a77aaca6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29e960b7-8574-4c38-bb22-67f5a77aaca6" (UID: "29e960b7-8574-4c38-bb22-67f5a77aaca6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.551094 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29e960b7-8574-4c38-bb22-67f5a77aaca6-scripts" (OuterVolumeSpecName: "scripts") pod "29e960b7-8574-4c38-bb22-67f5a77aaca6" (UID: "29e960b7-8574-4c38-bb22-67f5a77aaca6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.556211 4637 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29e960b7-8574-4c38-bb22-67f5a77aaca6-logs\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.556249 4637 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29e960b7-8574-4c38-bb22-67f5a77aaca6-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.556262 4637 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29e960b7-8574-4c38-bb22-67f5a77aaca6-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.556277 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e960b7-8574-4c38-bb22-67f5a77aaca6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.556289 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkkpl\" (UniqueName: \"kubernetes.io/projected/29e960b7-8574-4c38-bb22-67f5a77aaca6-kube-api-access-qkkpl\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.593101 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29e960b7-8574-4c38-bb22-67f5a77aaca6-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "29e960b7-8574-4c38-bb22-67f5a77aaca6" (UID: "29e960b7-8574-4c38-bb22-67f5a77aaca6"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.604272 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29e960b7-8574-4c38-bb22-67f5a77aaca6-config-data" (OuterVolumeSpecName: "config-data") pod "29e960b7-8574-4c38-bb22-67f5a77aaca6" (UID: "29e960b7-8574-4c38-bb22-67f5a77aaca6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.620078 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7756bcbc94-flscs" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.620263 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-76ccd9588d-b65nb_4162adc0-1edf-4059-a924-ef743026eda4/neutron-httpd/2.log" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.620759 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-76ccd9588d-b65nb" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.657075 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxsz5\" (UniqueName: \"kubernetes.io/projected/bdf391ef-d734-4a25-9726-c7254f4abc1a-kube-api-access-xxsz5\") pod \"bdf391ef-d734-4a25-9726-c7254f4abc1a\" (UID: \"bdf391ef-d734-4a25-9726-c7254f4abc1a\") " Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.657133 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4162adc0-1edf-4059-a924-ef743026eda4-httpd-config\") pod \"4162adc0-1edf-4059-a924-ef743026eda4\" (UID: \"4162adc0-1edf-4059-a924-ef743026eda4\") " Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.657181 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4162adc0-1edf-4059-a924-ef743026eda4-combined-ca-bundle\") pod \"4162adc0-1edf-4059-a924-ef743026eda4\" (UID: \"4162adc0-1edf-4059-a924-ef743026eda4\") " Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.657241 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdf391ef-d734-4a25-9726-c7254f4abc1a-config-data\") pod \"bdf391ef-d734-4a25-9726-c7254f4abc1a\" (UID: \"bdf391ef-d734-4a25-9726-c7254f4abc1a\") " Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.657265 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4162adc0-1edf-4059-a924-ef743026eda4-config\") pod \"4162adc0-1edf-4059-a924-ef743026eda4\" (UID: \"4162adc0-1edf-4059-a924-ef743026eda4\") " Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.657298 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/bdf391ef-d734-4a25-9726-c7254f4abc1a-config-data-custom\") pod \"bdf391ef-d734-4a25-9726-c7254f4abc1a\" (UID: \"bdf391ef-d734-4a25-9726-c7254f4abc1a\") " Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.657332 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmq7c\" (UniqueName: \"kubernetes.io/projected/4162adc0-1edf-4059-a924-ef743026eda4-kube-api-access-qmq7c\") pod \"4162adc0-1edf-4059-a924-ef743026eda4\" (UID: \"4162adc0-1edf-4059-a924-ef743026eda4\") " Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.657471 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf391ef-d734-4a25-9726-c7254f4abc1a-combined-ca-bundle\") pod \"bdf391ef-d734-4a25-9726-c7254f4abc1a\" (UID: \"bdf391ef-d734-4a25-9726-c7254f4abc1a\") " Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.657531 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdf391ef-d734-4a25-9726-c7254f4abc1a-logs\") pod \"bdf391ef-d734-4a25-9726-c7254f4abc1a\" (UID: \"bdf391ef-d734-4a25-9726-c7254f4abc1a\") " Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.657587 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4162adc0-1edf-4059-a924-ef743026eda4-ovndb-tls-certs\") pod \"4162adc0-1edf-4059-a924-ef743026eda4\" (UID: \"4162adc0-1edf-4059-a924-ef743026eda4\") " Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.658048 4637 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/29e960b7-8574-4c38-bb22-67f5a77aaca6-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.658067 4637 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/29e960b7-8574-4c38-bb22-67f5a77aaca6-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.665613 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4162adc0-1edf-4059-a924-ef743026eda4-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "4162adc0-1edf-4059-a924-ef743026eda4" (UID: "4162adc0-1edf-4059-a924-ef743026eda4"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.676084 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4162adc0-1edf-4059-a924-ef743026eda4-kube-api-access-qmq7c" (OuterVolumeSpecName: "kube-api-access-qmq7c") pod "4162adc0-1edf-4059-a924-ef743026eda4" (UID: "4162adc0-1edf-4059-a924-ef743026eda4"). InnerVolumeSpecName "kube-api-access-qmq7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.676570 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdf391ef-d734-4a25-9726-c7254f4abc1a-logs" (OuterVolumeSpecName: "logs") pod "bdf391ef-d734-4a25-9726-c7254f4abc1a" (UID: "bdf391ef-d734-4a25-9726-c7254f4abc1a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.688450 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdf391ef-d734-4a25-9726-c7254f4abc1a-kube-api-access-xxsz5" (OuterVolumeSpecName: "kube-api-access-xxsz5") pod "bdf391ef-d734-4a25-9726-c7254f4abc1a" (UID: "bdf391ef-d734-4a25-9726-c7254f4abc1a"). InnerVolumeSpecName "kube-api-access-xxsz5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.719643 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdf391ef-d734-4a25-9726-c7254f4abc1a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bdf391ef-d734-4a25-9726-c7254f4abc1a" (UID: "bdf391ef-d734-4a25-9726-c7254f4abc1a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.753688 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-25b5n"] Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.760350 4637 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdf391ef-d734-4a25-9726-c7254f4abc1a-logs\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.760384 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxsz5\" (UniqueName: \"kubernetes.io/projected/bdf391ef-d734-4a25-9726-c7254f4abc1a-kube-api-access-xxsz5\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.760398 4637 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4162adc0-1edf-4059-a924-ef743026eda4-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.760408 4637 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bdf391ef-d734-4a25-9726-c7254f4abc1a-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.760425 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmq7c\" (UniqueName: \"kubernetes.io/projected/4162adc0-1edf-4059-a924-ef743026eda4-kube-api-access-qmq7c\") on node \"crc\" 
DevicePath \"\"" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.765203 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6k8ct"] Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.776140 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-g96w2"] Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.779365 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdf391ef-d734-4a25-9726-c7254f4abc1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bdf391ef-d734-4a25-9726-c7254f4abc1a" (UID: "bdf391ef-d734-4a25-9726-c7254f4abc1a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.862383 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf391ef-d734-4a25-9726-c7254f4abc1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.863801 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4162adc0-1edf-4059-a924-ef743026eda4-config" (OuterVolumeSpecName: "config") pod "4162adc0-1edf-4059-a924-ef743026eda4" (UID: "4162adc0-1edf-4059-a924-ef743026eda4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.931921 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdf391ef-d734-4a25-9726-c7254f4abc1a-config-data" (OuterVolumeSpecName: "config-data") pod "bdf391ef-d734-4a25-9726-c7254f4abc1a" (UID: "bdf391ef-d734-4a25-9726-c7254f4abc1a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.940450 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4162adc0-1edf-4059-a924-ef743026eda4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4162adc0-1edf-4059-a924-ef743026eda4" (UID: "4162adc0-1edf-4059-a924-ef743026eda4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.964217 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4162adc0-1edf-4059-a924-ef743026eda4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.964255 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4162adc0-1edf-4059-a924-ef743026eda4-config\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.964266 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdf391ef-d734-4a25-9726-c7254f4abc1a-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:14 crc kubenswrapper[4637]: I1201 15:05:14.992115 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4162adc0-1edf-4059-a924-ef743026eda4-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "4162adc0-1edf-4059-a924-ef743026eda4" (UID: "4162adc0-1edf-4059-a924-ef743026eda4"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.049041 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-g96w2" event={"ID":"7e6ab34a-72f6-4908-a148-e08b308afb1a","Type":"ContainerStarted","Data":"d7ae8df910b65e2aed3033a8e808a80b011bd765a1d0a5dee213bb14676830ba"} Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.051173 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-76ccd9588d-b65nb_4162adc0-1edf-4059-a924-ef743026eda4/neutron-httpd/2.log" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.052205 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76ccd9588d-b65nb" event={"ID":"4162adc0-1edf-4059-a924-ef743026eda4","Type":"ContainerDied","Data":"63eb565e0f828d719bd95cd57a9bd0ac57fe8bbddde200d765c5a7c4e0a7157d"} Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.052241 4637 scope.go:117] "RemoveContainer" containerID="aa42fe3047e8d39dbb4caa91b253485b3d79bfaa9d48cdfe41701fcac8440e92" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.052734 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-76ccd9588d-b65nb" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.073274 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7756bcbc94-flscs" event={"ID":"bdf391ef-d734-4a25-9726-c7254f4abc1a","Type":"ContainerDied","Data":"c6e00dd1c2cc2977bb60a410f3b78af59dd525cb7a28a74c4451e641b443c094"} Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.073460 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7756bcbc94-flscs" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.076883 4637 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4162adc0-1edf-4059-a924-ef743026eda4-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.078964 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6k8ct" event={"ID":"44eb4d02-ab26-4592-9b34-698fe46a8c51","Type":"ContainerStarted","Data":"a210dd9a917db6103e44153f32d7f1c208ae0e84f5f5d165c7b27532e3aaebe9"} Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.085562 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64585bdddb-h9hvw" event={"ID":"29e960b7-8574-4c38-bb22-67f5a77aaca6","Type":"ContainerDied","Data":"6f96765cfc7534647988b65e28c0a422686925ee42553d89bd82da2da3a3bcad"} Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.085757 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-64585bdddb-h9hvw" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.095781 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2","Type":"ContainerStarted","Data":"b3756e1cdf99378a448011dbf16a5634ddd75cd69bcb866f02b308662c2f3e8a"} Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.096125 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="723c6c14-68e1-4e4e-9858-8fdfb95cfdb2" containerName="ceilometer-central-agent" containerID="cri-o://e08865f7ec1c259f66a2a6cac5169f634dbdd0e30a6aebda05222ae6ecae2590" gracePeriod=30 Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.096693 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="723c6c14-68e1-4e4e-9858-8fdfb95cfdb2" containerName="proxy-httpd" containerID="cri-o://b3756e1cdf99378a448011dbf16a5634ddd75cd69bcb866f02b308662c2f3e8a" gracePeriod=30 Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.096772 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="723c6c14-68e1-4e4e-9858-8fdfb95cfdb2" containerName="sg-core" containerID="cri-o://fd97911c76bfc3ddbe65dfee2be70cc81f03cb656db16ea141a99959c086c9d4" gracePeriod=30 Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.096848 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="723c6c14-68e1-4e4e-9858-8fdfb95cfdb2" containerName="ceilometer-notification-agent" containerID="cri-o://6b491ff33ed230f12f09b7b5b0f7b0025127144bc4c49bedfcf0ef506ea4f500" gracePeriod=30 Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.097299 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.114508 4637 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-25b5n" event={"ID":"1ceb774a-fe03-4b01-a371-d0a8b10e6b3d","Type":"ContainerStarted","Data":"234ddf3498330dbcb8f69b3b79188f81b26de619a8d9b07bb9a41c78fb901f9a"} Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.114534 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vlwgj" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.114952 4637 scope.go:117] "RemoveContainer" containerID="08f5f48d30b8b32e703127f1a7e1645a00d1778edeb91de396ceebe17595ff4f" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.140834 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.812430284 podStartE2EDuration="21.140814502s" podCreationTimestamp="2025-12-01 15:04:54 +0000 UTC" firstStartedPulling="2025-12-01 15:04:55.651070272 +0000 UTC m=+1146.168779100" lastFinishedPulling="2025-12-01 15:05:13.97945449 +0000 UTC m=+1164.497163318" observedRunningTime="2025-12-01 15:05:15.140148263 +0000 UTC m=+1165.657857091" watchObservedRunningTime="2025-12-01 15:05:15.140814502 +0000 UTC m=+1165.658523330" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.174193 4637 scope.go:117] "RemoveContainer" containerID="fd402de5b7c0948863d9dcaf6ee7a38b36d87c1727ef90984d357a19fab97b43" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.229803 4637 scope.go:117] "RemoveContainer" containerID="dce303b4b49cde975dfdcc16ee67330e1a7299498e48ba58dfd56bbac51376b3" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.251360 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-64585bdddb-h9hvw"] Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.264304 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-64585bdddb-h9hvw"] Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.265555 4637 scope.go:117] "RemoveContainer" 
containerID="d5804f49e6c3c4ca3bca0e4b57289f3cace1c1cf7586c4a959a71634341596a3" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.280162 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-76ccd9588d-b65nb"] Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.298861 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-76ccd9588d-b65nb"] Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.351637 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7756bcbc94-flscs"] Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.357918 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7756bcbc94-flscs"] Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.453785 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 15:05:15 crc kubenswrapper[4637]: E1201 15:05:15.467546 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29e960b7-8574-4c38-bb22-67f5a77aaca6" containerName="horizon" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.467662 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="29e960b7-8574-4c38-bb22-67f5a77aaca6" containerName="horizon" Dec 01 15:05:15 crc kubenswrapper[4637]: E1201 15:05:15.467791 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4162adc0-1edf-4059-a924-ef743026eda4" containerName="neutron-httpd" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.467859 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="4162adc0-1edf-4059-a924-ef743026eda4" containerName="neutron-httpd" Dec 01 15:05:15 crc kubenswrapper[4637]: E1201 15:05:15.467958 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75ffcaf0-9c6e-4f8e-98a0-b9c44529527d" containerName="cinder-db-sync" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.468043 4637 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="75ffcaf0-9c6e-4f8e-98a0-b9c44529527d" containerName="cinder-db-sync" Dec 01 15:05:15 crc kubenswrapper[4637]: E1201 15:05:15.468121 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf391ef-d734-4a25-9726-c7254f4abc1a" containerName="barbican-api-log" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.468195 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf391ef-d734-4a25-9726-c7254f4abc1a" containerName="barbican-api-log" Dec 01 15:05:15 crc kubenswrapper[4637]: E1201 15:05:15.468268 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4162adc0-1edf-4059-a924-ef743026eda4" containerName="neutron-httpd" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.468349 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="4162adc0-1edf-4059-a924-ef743026eda4" containerName="neutron-httpd" Dec 01 15:05:15 crc kubenswrapper[4637]: E1201 15:05:15.468405 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4162adc0-1edf-4059-a924-ef743026eda4" containerName="neutron-api" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.468457 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="4162adc0-1edf-4059-a924-ef743026eda4" containerName="neutron-api" Dec 01 15:05:15 crc kubenswrapper[4637]: E1201 15:05:15.468512 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4162adc0-1edf-4059-a924-ef743026eda4" containerName="neutron-httpd" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.468563 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="4162adc0-1edf-4059-a924-ef743026eda4" containerName="neutron-httpd" Dec 01 15:05:15 crc kubenswrapper[4637]: E1201 15:05:15.468615 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf391ef-d734-4a25-9726-c7254f4abc1a" containerName="barbican-api" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.468667 4637 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bdf391ef-d734-4a25-9726-c7254f4abc1a" containerName="barbican-api" Dec 01 15:05:15 crc kubenswrapper[4637]: E1201 15:05:15.468719 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29e960b7-8574-4c38-bb22-67f5a77aaca6" containerName="horizon-log" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.468765 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="29e960b7-8574-4c38-bb22-67f5a77aaca6" containerName="horizon-log" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.469062 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="4162adc0-1edf-4059-a924-ef743026eda4" containerName="neutron-httpd" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.469143 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="29e960b7-8574-4c38-bb22-67f5a77aaca6" containerName="horizon-log" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.469208 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="75ffcaf0-9c6e-4f8e-98a0-b9c44529527d" containerName="cinder-db-sync" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.471031 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="29e960b7-8574-4c38-bb22-67f5a77aaca6" containerName="horizon" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.471107 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="4162adc0-1edf-4059-a924-ef743026eda4" containerName="neutron-httpd" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.471160 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdf391ef-d734-4a25-9726-c7254f4abc1a" containerName="barbican-api-log" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.471220 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="4162adc0-1edf-4059-a924-ef743026eda4" containerName="neutron-api" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.471284 4637 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bdf391ef-d734-4a25-9726-c7254f4abc1a" containerName="barbican-api" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.471723 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="4162adc0-1edf-4059-a924-ef743026eda4" containerName="neutron-httpd" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.479292 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.483679 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.484031 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.485200 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.485524 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-m5gn9" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.554730 4637 scope.go:117] "RemoveContainer" containerID="dc893b69f071bb1a0f4f2bb759966b8e137cd8b71a954090eda6934ed381e989" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.592107 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vbrb\" (UniqueName: \"kubernetes.io/projected/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-kube-api-access-2vbrb\") pod \"cinder-scheduler-0\" (UID: \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.592186 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.592230 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.592305 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.592348 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.592419 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-config-data\") pod \"cinder-scheduler-0\" (UID: \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.619155 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.620120 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.620164 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.620200 4637 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.620865 4637 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b32cce2c47c067f58e7391d6910b6c3148987eb146b9d1e7fc73d2cd86f483da"} pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.620918 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" containerID="cri-o://b32cce2c47c067f58e7391d6910b6c3148987eb146b9d1e7fc73d2cd86f483da" gracePeriod=600 Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.644912 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-589zr"] Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.649447 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.670720 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-589zr"] Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.697603 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.697718 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.699032 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.699158 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-config-data\") pod \"cinder-scheduler-0\" (UID: \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.699218 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vbrb\" (UniqueName: \"kubernetes.io/projected/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-kube-api-access-2vbrb\") pod \"cinder-scheduler-0\" 
(UID: \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.700273 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-scripts\") pod \"cinder-scheduler-0\" (UID: \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.700331 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.707714 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.707885 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-scripts\") pod \"cinder-scheduler-0\" (UID: \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.719414 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.719833 4637 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-config-data\") pod \"cinder-scheduler-0\" (UID: \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.744777 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vbrb\" (UniqueName: \"kubernetes.io/projected/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-kube-api-access-2vbrb\") pod \"cinder-scheduler-0\" (UID: \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.798996 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.803165 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-589zr\" (UID: \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\") " pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.803208 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-config\") pod \"dnsmasq-dns-6bb4fc677f-589zr\" (UID: \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\") " pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.803252 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwdk4\" (UniqueName: \"kubernetes.io/projected/9b600fb5-a1de-4b48-93bb-bca6eaebca44-kube-api-access-fwdk4\") pod \"dnsmasq-dns-6bb4fc677f-589zr\" (UID: \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\") " 
pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.803285 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-589zr\" (UID: \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\") " pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.803401 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-589zr\" (UID: \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\") " pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.803455 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-589zr\" (UID: \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\") " pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.820593 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29e960b7-8574-4c38-bb22-67f5a77aaca6" path="/var/lib/kubelet/pods/29e960b7-8574-4c38-bb22-67f5a77aaca6/volumes" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.824863 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4162adc0-1edf-4059-a924-ef743026eda4" path="/var/lib/kubelet/pods/4162adc0-1edf-4059-a924-ef743026eda4/volumes" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.826009 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdf391ef-d734-4a25-9726-c7254f4abc1a" 
path="/var/lib/kubelet/pods/bdf391ef-d734-4a25-9726-c7254f4abc1a/volumes" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.826668 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.828797 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.828897 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.835126 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.907231 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-589zr\" (UID: \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\") " pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.907306 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-589zr\" (UID: \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\") " pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.907342 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-589zr\" (UID: \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\") " pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.907376 4637 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-config\") pod \"dnsmasq-dns-6bb4fc677f-589zr\" (UID: \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\") " pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.907409 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwdk4\" (UniqueName: \"kubernetes.io/projected/9b600fb5-a1de-4b48-93bb-bca6eaebca44-kube-api-access-fwdk4\") pod \"dnsmasq-dns-6bb4fc677f-589zr\" (UID: \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\") " pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.907457 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-589zr\" (UID: \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\") " pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.908265 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-589zr\" (UID: \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\") " pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.910880 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-589zr\" (UID: \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\") " pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.912061 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-589zr\" (UID: \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\") " pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.914432 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-589zr\" (UID: \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\") " pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.914794 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-config\") pod \"dnsmasq-dns-6bb4fc677f-589zr\" (UID: \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\") " pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.942906 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwdk4\" (UniqueName: \"kubernetes.io/projected/9b600fb5-a1de-4b48-93bb-bca6eaebca44-kube-api-access-fwdk4\") pod \"dnsmasq-dns-6bb4fc677f-589zr\" (UID: \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\") " pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" Dec 01 15:05:15 crc kubenswrapper[4637]: I1201 15:05:15.989923 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.008993 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\") " pod="openstack/cinder-api-0" Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.009049 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-logs\") pod \"cinder-api-0\" (UID: \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\") " pod="openstack/cinder-api-0" Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.009081 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\") " pod="openstack/cinder-api-0" Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.009136 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-scripts\") pod \"cinder-api-0\" (UID: \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\") " pod="openstack/cinder-api-0" Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.009165 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nznbh\" (UniqueName: \"kubernetes.io/projected/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-kube-api-access-nznbh\") pod \"cinder-api-0\" (UID: \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\") " pod="openstack/cinder-api-0" Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.009207 4637 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-config-data-custom\") pod \"cinder-api-0\" (UID: \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\") " pod="openstack/cinder-api-0" Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.009227 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-config-data\") pod \"cinder-api-0\" (UID: \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\") " pod="openstack/cinder-api-0" Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.110821 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-scripts\") pod \"cinder-api-0\" (UID: \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\") " pod="openstack/cinder-api-0" Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.111293 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nznbh\" (UniqueName: \"kubernetes.io/projected/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-kube-api-access-nznbh\") pod \"cinder-api-0\" (UID: \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\") " pod="openstack/cinder-api-0" Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.111771 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-config-data-custom\") pod \"cinder-api-0\" (UID: \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\") " pod="openstack/cinder-api-0" Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.111797 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-config-data\") pod \"cinder-api-0\" (UID: \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\") " pod="openstack/cinder-api-0" Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.111860 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\") " pod="openstack/cinder-api-0" Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.111878 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-logs\") pod \"cinder-api-0\" (UID: \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\") " pod="openstack/cinder-api-0" Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.111904 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\") " pod="openstack/cinder-api-0" Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.114231 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\") " pod="openstack/cinder-api-0" Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.114630 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-logs\") pod \"cinder-api-0\" (UID: \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\") " pod="openstack/cinder-api-0" Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.115677 4637 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\") " pod="openstack/cinder-api-0" Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.116575 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-config-data\") pod \"cinder-api-0\" (UID: \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\") " pod="openstack/cinder-api-0" Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.117070 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-scripts\") pod \"cinder-api-0\" (UID: \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\") " pod="openstack/cinder-api-0" Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.120434 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-config-data-custom\") pod \"cinder-api-0\" (UID: \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\") " pod="openstack/cinder-api-0" Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.138529 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nznbh\" (UniqueName: \"kubernetes.io/projected/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-kube-api-access-nznbh\") pod \"cinder-api-0\" (UID: \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\") " pod="openstack/cinder-api-0" Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.166020 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.167131 4637 generic.go:334] "Generic (PLEG): container finished" podID="723c6c14-68e1-4e4e-9858-8fdfb95cfdb2" containerID="b3756e1cdf99378a448011dbf16a5634ddd75cd69bcb866f02b308662c2f3e8a" exitCode=0 Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.167181 4637 generic.go:334] "Generic (PLEG): container finished" podID="723c6c14-68e1-4e4e-9858-8fdfb95cfdb2" containerID="fd97911c76bfc3ddbe65dfee2be70cc81f03cb656db16ea141a99959c086c9d4" exitCode=2 Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.167188 4637 generic.go:334] "Generic (PLEG): container finished" podID="723c6c14-68e1-4e4e-9858-8fdfb95cfdb2" containerID="e08865f7ec1c259f66a2a6cac5169f634dbdd0e30a6aebda05222ae6ecae2590" exitCode=0 Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.167248 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2","Type":"ContainerDied","Data":"b3756e1cdf99378a448011dbf16a5634ddd75cd69bcb866f02b308662c2f3e8a"} Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.167278 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2","Type":"ContainerDied","Data":"fd97911c76bfc3ddbe65dfee2be70cc81f03cb656db16ea141a99959c086c9d4"} Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.167288 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2","Type":"ContainerDied","Data":"e08865f7ec1c259f66a2a6cac5169f634dbdd0e30a6aebda05222ae6ecae2590"} Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.179234 4637 generic.go:334] "Generic (PLEG): container finished" podID="1ceb774a-fe03-4b01-a371-d0a8b10e6b3d" containerID="6d8a29b4e4e8cbb560dfff820c172d5a60de4077c174cbff8cbb05a068fc28af" exitCode=0 Dec 01 15:05:16 crc 
kubenswrapper[4637]: I1201 15:05:16.179329 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-25b5n" event={"ID":"1ceb774a-fe03-4b01-a371-d0a8b10e6b3d","Type":"ContainerDied","Data":"6d8a29b4e4e8cbb560dfff820c172d5a60de4077c174cbff8cbb05a068fc28af"} Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.182215 4637 generic.go:334] "Generic (PLEG): container finished" podID="7e6ab34a-72f6-4908-a148-e08b308afb1a" containerID="38274d56c7e6f98509e46809c04207b9585746ab7c01597e6026bb1a03374e61" exitCode=0 Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.182272 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-g96w2" event={"ID":"7e6ab34a-72f6-4908-a148-e08b308afb1a","Type":"ContainerDied","Data":"38274d56c7e6f98509e46809c04207b9585746ab7c01597e6026bb1a03374e61"} Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.235837 4637 generic.go:334] "Generic (PLEG): container finished" podID="44eb4d02-ab26-4592-9b34-698fe46a8c51" containerID="a340851ca1700ebddd37a2f7dded0b5519a9d28e492139ee8ebabda9298898b1" exitCode=0 Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.236358 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6k8ct" event={"ID":"44eb4d02-ab26-4592-9b34-698fe46a8c51","Type":"ContainerDied","Data":"a340851ca1700ebddd37a2f7dded0b5519a9d28e492139ee8ebabda9298898b1"} Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.245794 4637 generic.go:334] "Generic (PLEG): container finished" podID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerID="b32cce2c47c067f58e7391d6910b6c3148987eb146b9d1e7fc73d2cd86f483da" exitCode=0 Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.245834 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" 
event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerDied","Data":"b32cce2c47c067f58e7391d6910b6c3148987eb146b9d1e7fc73d2cd86f483da"} Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.245868 4637 scope.go:117] "RemoveContainer" containerID="e979781fddd064342f7468d039fd5c3c7d452779d2cd7d5b9f3797e85de0bed3" Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.438485 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 15:05:16 crc kubenswrapper[4637]: W1201 15:05:16.444538 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a0b3365_7330_4f1b_a63d_f814cdc5ac62.slice/crio-428eeb7741a9a3427473ceec4d856e65694673ad652f3d21e1c6e80d41f299f6 WatchSource:0}: Error finding container 428eeb7741a9a3427473ceec4d856e65694673ad652f3d21e1c6e80d41f299f6: Status 404 returned error can't find the container with id 428eeb7741a9a3427473ceec4d856e65694673ad652f3d21e1c6e80d41f299f6 Dec 01 15:05:16 crc kubenswrapper[4637]: W1201 15:05:16.619552 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b600fb5_a1de_4b48_93bb_bca6eaebca44.slice/crio-0505c08b1cabfb53710f20f1156f9fa05844f597fedc236594af056c769143c6 WatchSource:0}: Error finding container 0505c08b1cabfb53710f20f1156f9fa05844f597fedc236594af056c769143c6: Status 404 returned error can't find the container with id 0505c08b1cabfb53710f20f1156f9fa05844f597fedc236594af056c769143c6 Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.622849 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-589zr"] Dec 01 15:05:16 crc kubenswrapper[4637]: I1201 15:05:16.816274 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 01 15:05:17 crc kubenswrapper[4637]: I1201 15:05:17.265517 4637 generic.go:334] "Generic (PLEG): container finished" 
podID="9b600fb5-a1de-4b48-93bb-bca6eaebca44" containerID="6f377f9ced5743cba09fe236f09f4deeace29cbb988184d7e314ed7e4cf595cd" exitCode=0 Dec 01 15:05:17 crc kubenswrapper[4637]: I1201 15:05:17.265863 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" event={"ID":"9b600fb5-a1de-4b48-93bb-bca6eaebca44","Type":"ContainerDied","Data":"6f377f9ced5743cba09fe236f09f4deeace29cbb988184d7e314ed7e4cf595cd"} Dec 01 15:05:17 crc kubenswrapper[4637]: I1201 15:05:17.265895 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" event={"ID":"9b600fb5-a1de-4b48-93bb-bca6eaebca44","Type":"ContainerStarted","Data":"0505c08b1cabfb53710f20f1156f9fa05844f597fedc236594af056c769143c6"} Dec 01 15:05:17 crc kubenswrapper[4637]: I1201 15:05:17.283068 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1a0b3365-7330-4f1b-a63d-f814cdc5ac62","Type":"ContainerStarted","Data":"428eeb7741a9a3427473ceec4d856e65694673ad652f3d21e1c6e80d41f299f6"} Dec 01 15:05:17 crc kubenswrapper[4637]: I1201 15:05:17.301275 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerStarted","Data":"0c5320ef3b59baf3d6a19e6cd72f308b7cd46bf2e7050ff92c6f67ab6ef1839a"} Dec 01 15:05:17 crc kubenswrapper[4637]: I1201 15:05:17.327628 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9","Type":"ContainerStarted","Data":"0dad4ace96323afa77011172880ed72d961a5e9d2c95e43b19e4465b2673dbe2"} Dec 01 15:05:17 crc kubenswrapper[4637]: I1201 15:05:17.993836 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6k8ct" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.031753 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9lmn\" (UniqueName: \"kubernetes.io/projected/44eb4d02-ab26-4592-9b34-698fe46a8c51-kube-api-access-g9lmn\") pod \"44eb4d02-ab26-4592-9b34-698fe46a8c51\" (UID: \"44eb4d02-ab26-4592-9b34-698fe46a8c51\") " Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.085588 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44eb4d02-ab26-4592-9b34-698fe46a8c51-kube-api-access-g9lmn" (OuterVolumeSpecName: "kube-api-access-g9lmn") pod "44eb4d02-ab26-4592-9b34-698fe46a8c51" (UID: "44eb4d02-ab26-4592-9b34-698fe46a8c51"). InnerVolumeSpecName "kube-api-access-g9lmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.121533 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-25b5n" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.140402 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pj7h\" (UniqueName: \"kubernetes.io/projected/1ceb774a-fe03-4b01-a371-d0a8b10e6b3d-kube-api-access-6pj7h\") pod \"1ceb774a-fe03-4b01-a371-d0a8b10e6b3d\" (UID: \"1ceb774a-fe03-4b01-a371-d0a8b10e6b3d\") " Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.140834 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9lmn\" (UniqueName: \"kubernetes.io/projected/44eb4d02-ab26-4592-9b34-698fe46a8c51-kube-api-access-g9lmn\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.174337 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ceb774a-fe03-4b01-a371-d0a8b10e6b3d-kube-api-access-6pj7h" (OuterVolumeSpecName: "kube-api-access-6pj7h") pod "1ceb774a-fe03-4b01-a371-d0a8b10e6b3d" (UID: "1ceb774a-fe03-4b01-a371-d0a8b10e6b3d"). InnerVolumeSpecName "kube-api-access-6pj7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.188429 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-g96w2" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.230045 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.241548 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-combined-ca-bundle\") pod \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\" (UID: \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\") " Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.241603 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm9xb\" (UniqueName: \"kubernetes.io/projected/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-kube-api-access-sm9xb\") pod \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\" (UID: \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\") " Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.241717 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j249\" (UniqueName: \"kubernetes.io/projected/7e6ab34a-72f6-4908-a148-e08b308afb1a-kube-api-access-7j249\") pod \"7e6ab34a-72f6-4908-a148-e08b308afb1a\" (UID: \"7e6ab34a-72f6-4908-a148-e08b308afb1a\") " Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.241747 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-log-httpd\") pod \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\" (UID: \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\") " Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.241775 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-config-data\") pod \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\" (UID: \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\") " Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.241827 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-sg-core-conf-yaml\") pod \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\" (UID: \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\") " Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.241886 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-scripts\") pod \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\" (UID: \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\") " Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.241915 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-run-httpd\") pod \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\" (UID: \"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2\") " Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.242350 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pj7h\" (UniqueName: \"kubernetes.io/projected/1ceb774a-fe03-4b01-a371-d0a8b10e6b3d-kube-api-access-6pj7h\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.242709 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "723c6c14-68e1-4e4e-9858-8fdfb95cfdb2" (UID: "723c6c14-68e1-4e4e-9858-8fdfb95cfdb2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.263321 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-kube-api-access-sm9xb" (OuterVolumeSpecName: "kube-api-access-sm9xb") pod "723c6c14-68e1-4e4e-9858-8fdfb95cfdb2" (UID: "723c6c14-68e1-4e4e-9858-8fdfb95cfdb2"). 
InnerVolumeSpecName "kube-api-access-sm9xb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.263479 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "723c6c14-68e1-4e4e-9858-8fdfb95cfdb2" (UID: "723c6c14-68e1-4e4e-9858-8fdfb95cfdb2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.273103 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e6ab34a-72f6-4908-a148-e08b308afb1a-kube-api-access-7j249" (OuterVolumeSpecName: "kube-api-access-7j249") pod "7e6ab34a-72f6-4908-a148-e08b308afb1a" (UID: "7e6ab34a-72f6-4908-a148-e08b308afb1a"). InnerVolumeSpecName "kube-api-access-7j249". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.275841 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-scripts" (OuterVolumeSpecName: "scripts") pod "723c6c14-68e1-4e4e-9858-8fdfb95cfdb2" (UID: "723c6c14-68e1-4e4e-9858-8fdfb95cfdb2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.319117 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "723c6c14-68e1-4e4e-9858-8fdfb95cfdb2" (UID: "723c6c14-68e1-4e4e-9858-8fdfb95cfdb2"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.343792 4637 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.343817 4637 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.343828 4637 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.343838 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm9xb\" (UniqueName: \"kubernetes.io/projected/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-kube-api-access-sm9xb\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.343848 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j249\" (UniqueName: \"kubernetes.io/projected/7e6ab34a-72f6-4908-a148-e08b308afb1a-kube-api-access-7j249\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.343856 4637 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.357634 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-g96w2" event={"ID":"7e6ab34a-72f6-4908-a148-e08b308afb1a","Type":"ContainerDied","Data":"d7ae8df910b65e2aed3033a8e808a80b011bd765a1d0a5dee213bb14676830ba"} Dec 01 15:05:18 crc 
kubenswrapper[4637]: I1201 15:05:18.357679 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7ae8df910b65e2aed3033a8e808a80b011bd765a1d0a5dee213bb14676830ba" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.357744 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-g96w2" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.392295 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6k8ct" event={"ID":"44eb4d02-ab26-4592-9b34-698fe46a8c51","Type":"ContainerDied","Data":"a210dd9a917db6103e44153f32d7f1c208ae0e84f5f5d165c7b27532e3aaebe9"} Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.392339 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a210dd9a917db6103e44153f32d7f1c208ae0e84f5f5d165c7b27532e3aaebe9" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.392406 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6k8ct" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.413515 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9","Type":"ContainerStarted","Data":"7e0e4d217f270241d0ab3861ec37185817f4c3167948738de1b2010fc45f1459"} Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.425427 4637 generic.go:334] "Generic (PLEG): container finished" podID="723c6c14-68e1-4e4e-9858-8fdfb95cfdb2" containerID="6b491ff33ed230f12f09b7b5b0f7b0025127144bc4c49bedfcf0ef506ea4f500" exitCode=0 Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.425552 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2","Type":"ContainerDied","Data":"6b491ff33ed230f12f09b7b5b0f7b0025127144bc4c49bedfcf0ef506ea4f500"} Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.425583 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"723c6c14-68e1-4e4e-9858-8fdfb95cfdb2","Type":"ContainerDied","Data":"71edfc29cd9391565cff4cf9409ab2687647839ff50137fcb0a859d04c370ca9"} Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.425618 4637 scope.go:117] "RemoveContainer" containerID="b3756e1cdf99378a448011dbf16a5634ddd75cd69bcb866f02b308662c2f3e8a" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.426160 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.431438 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-25b5n" event={"ID":"1ceb774a-fe03-4b01-a371-d0a8b10e6b3d","Type":"ContainerDied","Data":"234ddf3498330dbcb8f69b3b79188f81b26de619a8d9b07bb9a41c78fb901f9a"} Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.431465 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="234ddf3498330dbcb8f69b3b79188f81b26de619a8d9b07bb9a41c78fb901f9a" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.431510 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-25b5n" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.438672 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" event={"ID":"9b600fb5-a1de-4b48-93bb-bca6eaebca44","Type":"ContainerStarted","Data":"5c82d0768ae56e4e0f5d0dcd53e25e37d02bf3eeb64d3d3778b2d9b5125e155c"} Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.439740 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.440711 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "723c6c14-68e1-4e4e-9858-8fdfb95cfdb2" (UID: "723c6c14-68e1-4e4e-9858-8fdfb95cfdb2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.445030 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.465794 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-config-data" (OuterVolumeSpecName: "config-data") pod "723c6c14-68e1-4e4e-9858-8fdfb95cfdb2" (UID: "723c6c14-68e1-4e4e-9858-8fdfb95cfdb2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.477045 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" podStartSLOduration=3.477023826 podStartE2EDuration="3.477023826s" podCreationTimestamp="2025-12-01 15:05:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:05:18.458893605 +0000 UTC m=+1168.976602423" watchObservedRunningTime="2025-12-01 15:05:18.477023826 +0000 UTC m=+1168.994732654" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.497416 4637 scope.go:117] "RemoveContainer" containerID="fd97911c76bfc3ddbe65dfee2be70cc81f03cb656db16ea141a99959c086c9d4" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.527901 4637 scope.go:117] "RemoveContainer" containerID="6b491ff33ed230f12f09b7b5b0f7b0025127144bc4c49bedfcf0ef506ea4f500" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.546500 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 
15:05:18.591873 4637 scope.go:117] "RemoveContainer" containerID="e08865f7ec1c259f66a2a6cac5169f634dbdd0e30a6aebda05222ae6ecae2590" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.647319 4637 scope.go:117] "RemoveContainer" containerID="b3756e1cdf99378a448011dbf16a5634ddd75cd69bcb866f02b308662c2f3e8a" Dec 01 15:05:18 crc kubenswrapper[4637]: E1201 15:05:18.648174 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3756e1cdf99378a448011dbf16a5634ddd75cd69bcb866f02b308662c2f3e8a\": container with ID starting with b3756e1cdf99378a448011dbf16a5634ddd75cd69bcb866f02b308662c2f3e8a not found: ID does not exist" containerID="b3756e1cdf99378a448011dbf16a5634ddd75cd69bcb866f02b308662c2f3e8a" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.648217 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3756e1cdf99378a448011dbf16a5634ddd75cd69bcb866f02b308662c2f3e8a"} err="failed to get container status \"b3756e1cdf99378a448011dbf16a5634ddd75cd69bcb866f02b308662c2f3e8a\": rpc error: code = NotFound desc = could not find container \"b3756e1cdf99378a448011dbf16a5634ddd75cd69bcb866f02b308662c2f3e8a\": container with ID starting with b3756e1cdf99378a448011dbf16a5634ddd75cd69bcb866f02b308662c2f3e8a not found: ID does not exist" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.648248 4637 scope.go:117] "RemoveContainer" containerID="fd97911c76bfc3ddbe65dfee2be70cc81f03cb656db16ea141a99959c086c9d4" Dec 01 15:05:18 crc kubenswrapper[4637]: E1201 15:05:18.650879 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd97911c76bfc3ddbe65dfee2be70cc81f03cb656db16ea141a99959c086c9d4\": container with ID starting with fd97911c76bfc3ddbe65dfee2be70cc81f03cb656db16ea141a99959c086c9d4 not found: ID does not exist" 
containerID="fd97911c76bfc3ddbe65dfee2be70cc81f03cb656db16ea141a99959c086c9d4" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.650906 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd97911c76bfc3ddbe65dfee2be70cc81f03cb656db16ea141a99959c086c9d4"} err="failed to get container status \"fd97911c76bfc3ddbe65dfee2be70cc81f03cb656db16ea141a99959c086c9d4\": rpc error: code = NotFound desc = could not find container \"fd97911c76bfc3ddbe65dfee2be70cc81f03cb656db16ea141a99959c086c9d4\": container with ID starting with fd97911c76bfc3ddbe65dfee2be70cc81f03cb656db16ea141a99959c086c9d4 not found: ID does not exist" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.650923 4637 scope.go:117] "RemoveContainer" containerID="6b491ff33ed230f12f09b7b5b0f7b0025127144bc4c49bedfcf0ef506ea4f500" Dec 01 15:05:18 crc kubenswrapper[4637]: E1201 15:05:18.651735 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b491ff33ed230f12f09b7b5b0f7b0025127144bc4c49bedfcf0ef506ea4f500\": container with ID starting with 6b491ff33ed230f12f09b7b5b0f7b0025127144bc4c49bedfcf0ef506ea4f500 not found: ID does not exist" containerID="6b491ff33ed230f12f09b7b5b0f7b0025127144bc4c49bedfcf0ef506ea4f500" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.651817 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b491ff33ed230f12f09b7b5b0f7b0025127144bc4c49bedfcf0ef506ea4f500"} err="failed to get container status \"6b491ff33ed230f12f09b7b5b0f7b0025127144bc4c49bedfcf0ef506ea4f500\": rpc error: code = NotFound desc = could not find container \"6b491ff33ed230f12f09b7b5b0f7b0025127144bc4c49bedfcf0ef506ea4f500\": container with ID starting with 6b491ff33ed230f12f09b7b5b0f7b0025127144bc4c49bedfcf0ef506ea4f500 not found: ID does not exist" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.651870 4637 scope.go:117] 
"RemoveContainer" containerID="e08865f7ec1c259f66a2a6cac5169f634dbdd0e30a6aebda05222ae6ecae2590" Dec 01 15:05:18 crc kubenswrapper[4637]: E1201 15:05:18.653896 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e08865f7ec1c259f66a2a6cac5169f634dbdd0e30a6aebda05222ae6ecae2590\": container with ID starting with e08865f7ec1c259f66a2a6cac5169f634dbdd0e30a6aebda05222ae6ecae2590 not found: ID does not exist" containerID="e08865f7ec1c259f66a2a6cac5169f634dbdd0e30a6aebda05222ae6ecae2590" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.653965 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e08865f7ec1c259f66a2a6cac5169f634dbdd0e30a6aebda05222ae6ecae2590"} err="failed to get container status \"e08865f7ec1c259f66a2a6cac5169f634dbdd0e30a6aebda05222ae6ecae2590\": rpc error: code = NotFound desc = could not find container \"e08865f7ec1c259f66a2a6cac5169f634dbdd0e30a6aebda05222ae6ecae2590\": container with ID starting with e08865f7ec1c259f66a2a6cac5169f634dbdd0e30a6aebda05222ae6ecae2590 not found: ID does not exist" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.884999 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.901883 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.943010 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:05:18 crc kubenswrapper[4637]: E1201 15:05:18.943414 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44eb4d02-ab26-4592-9b34-698fe46a8c51" containerName="mariadb-database-create" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.943430 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="44eb4d02-ab26-4592-9b34-698fe46a8c51" 
containerName="mariadb-database-create" Dec 01 15:05:18 crc kubenswrapper[4637]: E1201 15:05:18.943447 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="723c6c14-68e1-4e4e-9858-8fdfb95cfdb2" containerName="sg-core" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.943455 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="723c6c14-68e1-4e4e-9858-8fdfb95cfdb2" containerName="sg-core" Dec 01 15:05:18 crc kubenswrapper[4637]: E1201 15:05:18.943466 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6ab34a-72f6-4908-a148-e08b308afb1a" containerName="mariadb-database-create" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.943472 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6ab34a-72f6-4908-a148-e08b308afb1a" containerName="mariadb-database-create" Dec 01 15:05:18 crc kubenswrapper[4637]: E1201 15:05:18.943484 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="723c6c14-68e1-4e4e-9858-8fdfb95cfdb2" containerName="proxy-httpd" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.943489 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="723c6c14-68e1-4e4e-9858-8fdfb95cfdb2" containerName="proxy-httpd" Dec 01 15:05:18 crc kubenswrapper[4637]: E1201 15:05:18.943499 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="723c6c14-68e1-4e4e-9858-8fdfb95cfdb2" containerName="ceilometer-central-agent" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.943504 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="723c6c14-68e1-4e4e-9858-8fdfb95cfdb2" containerName="ceilometer-central-agent" Dec 01 15:05:18 crc kubenswrapper[4637]: E1201 15:05:18.943512 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="723c6c14-68e1-4e4e-9858-8fdfb95cfdb2" containerName="ceilometer-notification-agent" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.943519 4637 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="723c6c14-68e1-4e4e-9858-8fdfb95cfdb2" containerName="ceilometer-notification-agent" Dec 01 15:05:18 crc kubenswrapper[4637]: E1201 15:05:18.943540 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ceb774a-fe03-4b01-a371-d0a8b10e6b3d" containerName="mariadb-database-create" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.943545 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ceb774a-fe03-4b01-a371-d0a8b10e6b3d" containerName="mariadb-database-create" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.943702 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e6ab34a-72f6-4908-a148-e08b308afb1a" containerName="mariadb-database-create" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.943716 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="723c6c14-68e1-4e4e-9858-8fdfb95cfdb2" containerName="sg-core" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.943726 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="723c6c14-68e1-4e4e-9858-8fdfb95cfdb2" containerName="ceilometer-central-agent" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.943740 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="723c6c14-68e1-4e4e-9858-8fdfb95cfdb2" containerName="proxy-httpd" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.943749 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="723c6c14-68e1-4e4e-9858-8fdfb95cfdb2" containerName="ceilometer-notification-agent" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.943760 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="44eb4d02-ab26-4592-9b34-698fe46a8c51" containerName="mariadb-database-create" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.943770 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ceb774a-fe03-4b01-a371-d0a8b10e6b3d" containerName="mariadb-database-create" Dec 01 15:05:18 crc 
kubenswrapper[4637]: I1201 15:05:18.959161 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.976567 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.988791 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:05:18 crc kubenswrapper[4637]: I1201 15:05:18.999309 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 15:05:19 crc kubenswrapper[4637]: I1201 15:05:19.058005 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0d20541-3861-4d08-89d8-f62a23d65bb9-config-data\") pod \"ceilometer-0\" (UID: \"c0d20541-3861-4d08-89d8-f62a23d65bb9\") " pod="openstack/ceilometer-0" Dec 01 15:05:19 crc kubenswrapper[4637]: I1201 15:05:19.058087 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0d20541-3861-4d08-89d8-f62a23d65bb9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0d20541-3861-4d08-89d8-f62a23d65bb9\") " pod="openstack/ceilometer-0" Dec 01 15:05:19 crc kubenswrapper[4637]: I1201 15:05:19.058108 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0d20541-3861-4d08-89d8-f62a23d65bb9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0d20541-3861-4d08-89d8-f62a23d65bb9\") " pod="openstack/ceilometer-0" Dec 01 15:05:19 crc kubenswrapper[4637]: I1201 15:05:19.058138 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c0d20541-3861-4d08-89d8-f62a23d65bb9-log-httpd\") pod \"ceilometer-0\" (UID: \"c0d20541-3861-4d08-89d8-f62a23d65bb9\") " pod="openstack/ceilometer-0" Dec 01 15:05:19 crc kubenswrapper[4637]: I1201 15:05:19.058220 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0d20541-3861-4d08-89d8-f62a23d65bb9-run-httpd\") pod \"ceilometer-0\" (UID: \"c0d20541-3861-4d08-89d8-f62a23d65bb9\") " pod="openstack/ceilometer-0" Dec 01 15:05:19 crc kubenswrapper[4637]: I1201 15:05:19.058250 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvv2t\" (UniqueName: \"kubernetes.io/projected/c0d20541-3861-4d08-89d8-f62a23d65bb9-kube-api-access-tvv2t\") pod \"ceilometer-0\" (UID: \"c0d20541-3861-4d08-89d8-f62a23d65bb9\") " pod="openstack/ceilometer-0" Dec 01 15:05:19 crc kubenswrapper[4637]: I1201 15:05:19.058284 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0d20541-3861-4d08-89d8-f62a23d65bb9-scripts\") pod \"ceilometer-0\" (UID: \"c0d20541-3861-4d08-89d8-f62a23d65bb9\") " pod="openstack/ceilometer-0" Dec 01 15:05:19 crc kubenswrapper[4637]: I1201 15:05:19.163844 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0d20541-3861-4d08-89d8-f62a23d65bb9-scripts\") pod \"ceilometer-0\" (UID: \"c0d20541-3861-4d08-89d8-f62a23d65bb9\") " pod="openstack/ceilometer-0" Dec 01 15:05:19 crc kubenswrapper[4637]: I1201 15:05:19.202293 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0d20541-3861-4d08-89d8-f62a23d65bb9-config-data\") pod \"ceilometer-0\" (UID: \"c0d20541-3861-4d08-89d8-f62a23d65bb9\") " pod="openstack/ceilometer-0" Dec 01 15:05:19 crc 
kubenswrapper[4637]: I1201 15:05:19.202920 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0d20541-3861-4d08-89d8-f62a23d65bb9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0d20541-3861-4d08-89d8-f62a23d65bb9\") " pod="openstack/ceilometer-0" Dec 01 15:05:19 crc kubenswrapper[4637]: I1201 15:05:19.173632 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0d20541-3861-4d08-89d8-f62a23d65bb9-scripts\") pod \"ceilometer-0\" (UID: \"c0d20541-3861-4d08-89d8-f62a23d65bb9\") " pod="openstack/ceilometer-0" Dec 01 15:05:19 crc kubenswrapper[4637]: I1201 15:05:19.203022 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0d20541-3861-4d08-89d8-f62a23d65bb9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0d20541-3861-4d08-89d8-f62a23d65bb9\") " pod="openstack/ceilometer-0" Dec 01 15:05:19 crc kubenswrapper[4637]: I1201 15:05:19.204567 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0d20541-3861-4d08-89d8-f62a23d65bb9-log-httpd\") pod \"ceilometer-0\" (UID: \"c0d20541-3861-4d08-89d8-f62a23d65bb9\") " pod="openstack/ceilometer-0" Dec 01 15:05:19 crc kubenswrapper[4637]: I1201 15:05:19.204826 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0d20541-3861-4d08-89d8-f62a23d65bb9-run-httpd\") pod \"ceilometer-0\" (UID: \"c0d20541-3861-4d08-89d8-f62a23d65bb9\") " pod="openstack/ceilometer-0" Dec 01 15:05:19 crc kubenswrapper[4637]: I1201 15:05:19.205013 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvv2t\" (UniqueName: \"kubernetes.io/projected/c0d20541-3861-4d08-89d8-f62a23d65bb9-kube-api-access-tvv2t\") 
pod \"ceilometer-0\" (UID: \"c0d20541-3861-4d08-89d8-f62a23d65bb9\") " pod="openstack/ceilometer-0" Dec 01 15:05:19 crc kubenswrapper[4637]: I1201 15:05:19.205260 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0d20541-3861-4d08-89d8-f62a23d65bb9-run-httpd\") pod \"ceilometer-0\" (UID: \"c0d20541-3861-4d08-89d8-f62a23d65bb9\") " pod="openstack/ceilometer-0" Dec 01 15:05:19 crc kubenswrapper[4637]: I1201 15:05:19.206550 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0d20541-3861-4d08-89d8-f62a23d65bb9-log-httpd\") pod \"ceilometer-0\" (UID: \"c0d20541-3861-4d08-89d8-f62a23d65bb9\") " pod="openstack/ceilometer-0" Dec 01 15:05:19 crc kubenswrapper[4637]: I1201 15:05:19.207604 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0d20541-3861-4d08-89d8-f62a23d65bb9-config-data\") pod \"ceilometer-0\" (UID: \"c0d20541-3861-4d08-89d8-f62a23d65bb9\") " pod="openstack/ceilometer-0" Dec 01 15:05:19 crc kubenswrapper[4637]: I1201 15:05:19.244357 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0d20541-3861-4d08-89d8-f62a23d65bb9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0d20541-3861-4d08-89d8-f62a23d65bb9\") " pod="openstack/ceilometer-0" Dec 01 15:05:19 crc kubenswrapper[4637]: I1201 15:05:19.274786 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvv2t\" (UniqueName: \"kubernetes.io/projected/c0d20541-3861-4d08-89d8-f62a23d65bb9-kube-api-access-tvv2t\") pod \"ceilometer-0\" (UID: \"c0d20541-3861-4d08-89d8-f62a23d65bb9\") " pod="openstack/ceilometer-0" Dec 01 15:05:19 crc kubenswrapper[4637]: I1201 15:05:19.281310 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/c0d20541-3861-4d08-89d8-f62a23d65bb9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0d20541-3861-4d08-89d8-f62a23d65bb9\") " pod="openstack/ceilometer-0" Dec 01 15:05:19 crc kubenswrapper[4637]: I1201 15:05:19.336219 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:05:19 crc kubenswrapper[4637]: I1201 15:05:19.443959 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 01 15:05:19 crc kubenswrapper[4637]: I1201 15:05:19.815123 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="723c6c14-68e1-4e4e-9858-8fdfb95cfdb2" path="/var/lib/kubelet/pods/723c6c14-68e1-4e4e-9858-8fdfb95cfdb2/volumes" Dec 01 15:05:19 crc kubenswrapper[4637]: I1201 15:05:19.986370 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:05:20 crc kubenswrapper[4637]: I1201 15:05:20.474970 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1a0b3365-7330-4f1b-a63d-f814cdc5ac62","Type":"ContainerStarted","Data":"3401204fbd1b19d46166e24e49ea04ab97befb9cbed0ec07bf31d03a157ce419"} Dec 01 15:05:20 crc kubenswrapper[4637]: I1201 15:05:20.475312 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1a0b3365-7330-4f1b-a63d-f814cdc5ac62","Type":"ContainerStarted","Data":"126d6d4b452460a27f05d362cdeb9d99a809ab298964b0e7d5c438f9873d5613"} Dec 01 15:05:20 crc kubenswrapper[4637]: I1201 15:05:20.477014 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9","Type":"ContainerStarted","Data":"e4e936dc2002fc434b74189008eb6daa39565c2b43e0ab42e7c03e63ad8c1e92"} Dec 01 15:05:20 crc kubenswrapper[4637]: I1201 15:05:20.477200 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" 
podUID="32fb52c1-99c2-4ce4-a553-fb5f1760e4c9" containerName="cinder-api-log" containerID="cri-o://7e0e4d217f270241d0ab3861ec37185817f4c3167948738de1b2010fc45f1459" gracePeriod=30 Dec 01 15:05:20 crc kubenswrapper[4637]: I1201 15:05:20.477469 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 01 15:05:20 crc kubenswrapper[4637]: I1201 15:05:20.477506 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="32fb52c1-99c2-4ce4-a553-fb5f1760e4c9" containerName="cinder-api" containerID="cri-o://e4e936dc2002fc434b74189008eb6daa39565c2b43e0ab42e7c03e63ad8c1e92" gracePeriod=30 Dec 01 15:05:20 crc kubenswrapper[4637]: I1201 15:05:20.480603 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0d20541-3861-4d08-89d8-f62a23d65bb9","Type":"ContainerStarted","Data":"57f3a48466200a21881efcbab4f0a9d33bb67342ef8592637e325fc569feb743"} Dec 01 15:05:20 crc kubenswrapper[4637]: I1201 15:05:20.498168 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.099706397 podStartE2EDuration="5.498149602s" podCreationTimestamp="2025-12-01 15:05:15 +0000 UTC" firstStartedPulling="2025-12-01 15:05:16.451269244 +0000 UTC m=+1166.968978072" lastFinishedPulling="2025-12-01 15:05:17.849712449 +0000 UTC m=+1168.367421277" observedRunningTime="2025-12-01 15:05:20.496421366 +0000 UTC m=+1171.014130194" watchObservedRunningTime="2025-12-01 15:05:20.498149602 +0000 UTC m=+1171.015858420" Dec 01 15:05:20 crc kubenswrapper[4637]: I1201 15:05:20.519332 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.519316955 podStartE2EDuration="5.519316955s" podCreationTimestamp="2025-12-01 15:05:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-01 15:05:20.517390883 +0000 UTC m=+1171.035099711" watchObservedRunningTime="2025-12-01 15:05:20.519316955 +0000 UTC m=+1171.037025783" Dec 01 15:05:20 crc kubenswrapper[4637]: I1201 15:05:20.799630 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 01 15:05:21 crc kubenswrapper[4637]: E1201 15:05:21.011983 4637 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32fb52c1_99c2_4ce4_a553_fb5f1760e4c9.slice/crio-conmon-e4e936dc2002fc434b74189008eb6daa39565c2b43e0ab42e7c03e63ad8c1e92.scope\": RecentStats: unable to find data in memory cache]" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.205304 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.270995 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-etc-machine-id\") pod \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\" (UID: \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\") " Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.271156 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-config-data-custom\") pod \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\" (UID: \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\") " Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.271146 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "32fb52c1-99c2-4ce4-a553-fb5f1760e4c9" (UID: "32fb52c1-99c2-4ce4-a553-fb5f1760e4c9"). 
InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.272537 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-config-data\") pod \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\" (UID: \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\") " Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.272846 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-combined-ca-bundle\") pod \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\" (UID: \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\") " Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.272885 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-logs\") pod \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\" (UID: \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\") " Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.272913 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-scripts\") pod \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\" (UID: \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\") " Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.273023 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nznbh\" (UniqueName: \"kubernetes.io/projected/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-kube-api-access-nznbh\") pod \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\" (UID: \"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9\") " Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.273530 4637 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.277490 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-logs" (OuterVolumeSpecName: "logs") pod "32fb52c1-99c2-4ce4-a553-fb5f1760e4c9" (UID: "32fb52c1-99c2-4ce4-a553-fb5f1760e4c9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.279155 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "32fb52c1-99c2-4ce4-a553-fb5f1760e4c9" (UID: "32fb52c1-99c2-4ce4-a553-fb5f1760e4c9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.293054 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-scripts" (OuterVolumeSpecName: "scripts") pod "32fb52c1-99c2-4ce4-a553-fb5f1760e4c9" (UID: "32fb52c1-99c2-4ce4-a553-fb5f1760e4c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.293359 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-kube-api-access-nznbh" (OuterVolumeSpecName: "kube-api-access-nznbh") pod "32fb52c1-99c2-4ce4-a553-fb5f1760e4c9" (UID: "32fb52c1-99c2-4ce4-a553-fb5f1760e4c9"). InnerVolumeSpecName "kube-api-access-nznbh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.325475 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32fb52c1-99c2-4ce4-a553-fb5f1760e4c9" (UID: "32fb52c1-99c2-4ce4-a553-fb5f1760e4c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.373153 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-config-data" (OuterVolumeSpecName: "config-data") pod "32fb52c1-99c2-4ce4-a553-fb5f1760e4c9" (UID: "32fb52c1-99c2-4ce4-a553-fb5f1760e4c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.375381 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.375409 4637 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-logs\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.375419 4637 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.375428 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nznbh\" (UniqueName: \"kubernetes.io/projected/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-kube-api-access-nznbh\") on node \"crc\" DevicePath \"\"" Dec 01 
15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.375438 4637 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.375445 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.512822 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0d20541-3861-4d08-89d8-f62a23d65bb9","Type":"ContainerStarted","Data":"acf800f5e1529efe64418ce0978897bb7f78de9c38a5771ebc1db0668c6f9d3b"} Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.518442 4637 generic.go:334] "Generic (PLEG): container finished" podID="32fb52c1-99c2-4ce4-a553-fb5f1760e4c9" containerID="e4e936dc2002fc434b74189008eb6daa39565c2b43e0ab42e7c03e63ad8c1e92" exitCode=0 Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.518479 4637 generic.go:334] "Generic (PLEG): container finished" podID="32fb52c1-99c2-4ce4-a553-fb5f1760e4c9" containerID="7e0e4d217f270241d0ab3861ec37185817f4c3167948738de1b2010fc45f1459" exitCode=143 Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.520242 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9","Type":"ContainerDied","Data":"e4e936dc2002fc434b74189008eb6daa39565c2b43e0ab42e7c03e63ad8c1e92"} Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.520263 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.520294 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9","Type":"ContainerDied","Data":"7e0e4d217f270241d0ab3861ec37185817f4c3167948738de1b2010fc45f1459"} Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.520312 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"32fb52c1-99c2-4ce4-a553-fb5f1760e4c9","Type":"ContainerDied","Data":"0dad4ace96323afa77011172880ed72d961a5e9d2c95e43b19e4465b2673dbe2"} Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.520339 4637 scope.go:117] "RemoveContainer" containerID="e4e936dc2002fc434b74189008eb6daa39565c2b43e0ab42e7c03e63ad8c1e92" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.608076 4637 scope.go:117] "RemoveContainer" containerID="7e0e4d217f270241d0ab3861ec37185817f4c3167948738de1b2010fc45f1459" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.624054 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.641408 4637 scope.go:117] "RemoveContainer" containerID="e4e936dc2002fc434b74189008eb6daa39565c2b43e0ab42e7c03e63ad8c1e92" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.644269 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 01 15:05:21 crc kubenswrapper[4637]: E1201 15:05:21.644459 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4e936dc2002fc434b74189008eb6daa39565c2b43e0ab42e7c03e63ad8c1e92\": container with ID starting with e4e936dc2002fc434b74189008eb6daa39565c2b43e0ab42e7c03e63ad8c1e92 not found: ID does not exist" containerID="e4e936dc2002fc434b74189008eb6daa39565c2b43e0ab42e7c03e63ad8c1e92" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 
15:05:21.644539 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4e936dc2002fc434b74189008eb6daa39565c2b43e0ab42e7c03e63ad8c1e92"} err="failed to get container status \"e4e936dc2002fc434b74189008eb6daa39565c2b43e0ab42e7c03e63ad8c1e92\": rpc error: code = NotFound desc = could not find container \"e4e936dc2002fc434b74189008eb6daa39565c2b43e0ab42e7c03e63ad8c1e92\": container with ID starting with e4e936dc2002fc434b74189008eb6daa39565c2b43e0ab42e7c03e63ad8c1e92 not found: ID does not exist" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.644560 4637 scope.go:117] "RemoveContainer" containerID="7e0e4d217f270241d0ab3861ec37185817f4c3167948738de1b2010fc45f1459" Dec 01 15:05:21 crc kubenswrapper[4637]: E1201 15:05:21.646097 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e0e4d217f270241d0ab3861ec37185817f4c3167948738de1b2010fc45f1459\": container with ID starting with 7e0e4d217f270241d0ab3861ec37185817f4c3167948738de1b2010fc45f1459 not found: ID does not exist" containerID="7e0e4d217f270241d0ab3861ec37185817f4c3167948738de1b2010fc45f1459" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.646126 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e0e4d217f270241d0ab3861ec37185817f4c3167948738de1b2010fc45f1459"} err="failed to get container status \"7e0e4d217f270241d0ab3861ec37185817f4c3167948738de1b2010fc45f1459\": rpc error: code = NotFound desc = could not find container \"7e0e4d217f270241d0ab3861ec37185817f4c3167948738de1b2010fc45f1459\": container with ID starting with 7e0e4d217f270241d0ab3861ec37185817f4c3167948738de1b2010fc45f1459 not found: ID does not exist" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.646142 4637 scope.go:117] "RemoveContainer" containerID="e4e936dc2002fc434b74189008eb6daa39565c2b43e0ab42e7c03e63ad8c1e92" Dec 01 15:05:21 crc 
kubenswrapper[4637]: I1201 15:05:21.646623 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4e936dc2002fc434b74189008eb6daa39565c2b43e0ab42e7c03e63ad8c1e92"} err="failed to get container status \"e4e936dc2002fc434b74189008eb6daa39565c2b43e0ab42e7c03e63ad8c1e92\": rpc error: code = NotFound desc = could not find container \"e4e936dc2002fc434b74189008eb6daa39565c2b43e0ab42e7c03e63ad8c1e92\": container with ID starting with e4e936dc2002fc434b74189008eb6daa39565c2b43e0ab42e7c03e63ad8c1e92 not found: ID does not exist" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.646658 4637 scope.go:117] "RemoveContainer" containerID="7e0e4d217f270241d0ab3861ec37185817f4c3167948738de1b2010fc45f1459" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.646883 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e0e4d217f270241d0ab3861ec37185817f4c3167948738de1b2010fc45f1459"} err="failed to get container status \"7e0e4d217f270241d0ab3861ec37185817f4c3167948738de1b2010fc45f1459\": rpc error: code = NotFound desc = could not find container \"7e0e4d217f270241d0ab3861ec37185817f4c3167948738de1b2010fc45f1459\": container with ID starting with 7e0e4d217f270241d0ab3861ec37185817f4c3167948738de1b2010fc45f1459 not found: ID does not exist" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.653027 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 01 15:05:21 crc kubenswrapper[4637]: E1201 15:05:21.655252 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32fb52c1-99c2-4ce4-a553-fb5f1760e4c9" containerName="cinder-api-log" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.655296 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="32fb52c1-99c2-4ce4-a553-fb5f1760e4c9" containerName="cinder-api-log" Dec 01 15:05:21 crc kubenswrapper[4637]: E1201 15:05:21.655341 4637 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="32fb52c1-99c2-4ce4-a553-fb5f1760e4c9" containerName="cinder-api" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.655348 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="32fb52c1-99c2-4ce4-a553-fb5f1760e4c9" containerName="cinder-api" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.655707 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="32fb52c1-99c2-4ce4-a553-fb5f1760e4c9" containerName="cinder-api-log" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.655730 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="32fb52c1-99c2-4ce4-a553-fb5f1760e4c9" containerName="cinder-api" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.656827 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.660873 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.666449 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.666710 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.666887 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.784519 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a976758-2d0a-43be-ad2f-69b00b2fec4a-scripts\") pod \"cinder-api-0\" (UID: \"8a976758-2d0a-43be-ad2f-69b00b2fec4a\") " pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.784573 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a976758-2d0a-43be-ad2f-69b00b2fec4a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8a976758-2d0a-43be-ad2f-69b00b2fec4a\") " pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.784590 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a976758-2d0a-43be-ad2f-69b00b2fec4a-config-data\") pod \"cinder-api-0\" (UID: \"8a976758-2d0a-43be-ad2f-69b00b2fec4a\") " pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.784608 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2x4h\" (UniqueName: \"kubernetes.io/projected/8a976758-2d0a-43be-ad2f-69b00b2fec4a-kube-api-access-c2x4h\") pod \"cinder-api-0\" (UID: \"8a976758-2d0a-43be-ad2f-69b00b2fec4a\") " pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.784636 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8a976758-2d0a-43be-ad2f-69b00b2fec4a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8a976758-2d0a-43be-ad2f-69b00b2fec4a\") " pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.784673 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a976758-2d0a-43be-ad2f-69b00b2fec4a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8a976758-2d0a-43be-ad2f-69b00b2fec4a\") " pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.784698 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8a976758-2d0a-43be-ad2f-69b00b2fec4a-logs\") pod \"cinder-api-0\" (UID: \"8a976758-2d0a-43be-ad2f-69b00b2fec4a\") " pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.784777 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a976758-2d0a-43be-ad2f-69b00b2fec4a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8a976758-2d0a-43be-ad2f-69b00b2fec4a\") " pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.784800 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a976758-2d0a-43be-ad2f-69b00b2fec4a-config-data-custom\") pod \"cinder-api-0\" (UID: \"8a976758-2d0a-43be-ad2f-69b00b2fec4a\") " pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.788207 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32fb52c1-99c2-4ce4-a553-fb5f1760e4c9" path="/var/lib/kubelet/pods/32fb52c1-99c2-4ce4-a553-fb5f1760e4c9/volumes" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.886317 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a976758-2d0a-43be-ad2f-69b00b2fec4a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8a976758-2d0a-43be-ad2f-69b00b2fec4a\") " pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.887569 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a976758-2d0a-43be-ad2f-69b00b2fec4a-config-data-custom\") pod \"cinder-api-0\" (UID: \"8a976758-2d0a-43be-ad2f-69b00b2fec4a\") " pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.887605 4637 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a976758-2d0a-43be-ad2f-69b00b2fec4a-scripts\") pod \"cinder-api-0\" (UID: \"8a976758-2d0a-43be-ad2f-69b00b2fec4a\") " pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.888500 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a976758-2d0a-43be-ad2f-69b00b2fec4a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8a976758-2d0a-43be-ad2f-69b00b2fec4a\") " pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.888518 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a976758-2d0a-43be-ad2f-69b00b2fec4a-config-data\") pod \"cinder-api-0\" (UID: \"8a976758-2d0a-43be-ad2f-69b00b2fec4a\") " pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.888535 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2x4h\" (UniqueName: \"kubernetes.io/projected/8a976758-2d0a-43be-ad2f-69b00b2fec4a-kube-api-access-c2x4h\") pod \"cinder-api-0\" (UID: \"8a976758-2d0a-43be-ad2f-69b00b2fec4a\") " pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.888582 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8a976758-2d0a-43be-ad2f-69b00b2fec4a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8a976758-2d0a-43be-ad2f-69b00b2fec4a\") " pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.888632 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a976758-2d0a-43be-ad2f-69b00b2fec4a-public-tls-certs\") pod \"cinder-api-0\" (UID: 
\"8a976758-2d0a-43be-ad2f-69b00b2fec4a\") " pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.888662 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a976758-2d0a-43be-ad2f-69b00b2fec4a-logs\") pod \"cinder-api-0\" (UID: \"8a976758-2d0a-43be-ad2f-69b00b2fec4a\") " pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.888807 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8a976758-2d0a-43be-ad2f-69b00b2fec4a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8a976758-2d0a-43be-ad2f-69b00b2fec4a\") " pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.890097 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a976758-2d0a-43be-ad2f-69b00b2fec4a-logs\") pod \"cinder-api-0\" (UID: \"8a976758-2d0a-43be-ad2f-69b00b2fec4a\") " pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.900350 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a976758-2d0a-43be-ad2f-69b00b2fec4a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8a976758-2d0a-43be-ad2f-69b00b2fec4a\") " pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.907187 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a976758-2d0a-43be-ad2f-69b00b2fec4a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8a976758-2d0a-43be-ad2f-69b00b2fec4a\") " pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.907377 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8a976758-2d0a-43be-ad2f-69b00b2fec4a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8a976758-2d0a-43be-ad2f-69b00b2fec4a\") " pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.909358 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a976758-2d0a-43be-ad2f-69b00b2fec4a-scripts\") pod \"cinder-api-0\" (UID: \"8a976758-2d0a-43be-ad2f-69b00b2fec4a\") " pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.910764 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a976758-2d0a-43be-ad2f-69b00b2fec4a-config-data-custom\") pod \"cinder-api-0\" (UID: \"8a976758-2d0a-43be-ad2f-69b00b2fec4a\") " pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.922601 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2x4h\" (UniqueName: \"kubernetes.io/projected/8a976758-2d0a-43be-ad2f-69b00b2fec4a-kube-api-access-c2x4h\") pod \"cinder-api-0\" (UID: \"8a976758-2d0a-43be-ad2f-69b00b2fec4a\") " pod="openstack/cinder-api-0" Dec 01 15:05:21 crc kubenswrapper[4637]: I1201 15:05:21.923503 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a976758-2d0a-43be-ad2f-69b00b2fec4a-config-data\") pod \"cinder-api-0\" (UID: \"8a976758-2d0a-43be-ad2f-69b00b2fec4a\") " pod="openstack/cinder-api-0" Dec 01 15:05:22 crc kubenswrapper[4637]: I1201 15:05:22.003624 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 01 15:05:22 crc kubenswrapper[4637]: I1201 15:05:22.178212 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-9f3a-account-create-thgql"] Dec 01 15:05:22 crc kubenswrapper[4637]: I1201 15:05:22.222788 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9f3a-account-create-thgql" Dec 01 15:05:22 crc kubenswrapper[4637]: I1201 15:05:22.236798 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 01 15:05:22 crc kubenswrapper[4637]: I1201 15:05:22.246664 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9f3a-account-create-thgql"] Dec 01 15:05:22 crc kubenswrapper[4637]: I1201 15:05:22.321704 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m52mt\" (UniqueName: \"kubernetes.io/projected/d51c2122-e195-4811-b65a-677e548f80ea-kube-api-access-m52mt\") pod \"nova-api-9f3a-account-create-thgql\" (UID: \"d51c2122-e195-4811-b65a-677e548f80ea\") " pod="openstack/nova-api-9f3a-account-create-thgql" Dec 01 15:05:22 crc kubenswrapper[4637]: I1201 15:05:22.420542 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-4785-account-create-88s8h"] Dec 01 15:05:22 crc kubenswrapper[4637]: I1201 15:05:22.423178 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-4785-account-create-88s8h" Dec 01 15:05:22 crc kubenswrapper[4637]: I1201 15:05:22.425994 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m52mt\" (UniqueName: \"kubernetes.io/projected/d51c2122-e195-4811-b65a-677e548f80ea-kube-api-access-m52mt\") pod \"nova-api-9f3a-account-create-thgql\" (UID: \"d51c2122-e195-4811-b65a-677e548f80ea\") " pod="openstack/nova-api-9f3a-account-create-thgql" Dec 01 15:05:22 crc kubenswrapper[4637]: I1201 15:05:22.435211 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 01 15:05:22 crc kubenswrapper[4637]: I1201 15:05:22.446310 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-4785-account-create-88s8h"] Dec 01 15:05:22 crc kubenswrapper[4637]: I1201 15:05:22.460685 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m52mt\" (UniqueName: \"kubernetes.io/projected/d51c2122-e195-4811-b65a-677e548f80ea-kube-api-access-m52mt\") pod \"nova-api-9f3a-account-create-thgql\" (UID: \"d51c2122-e195-4811-b65a-677e548f80ea\") " pod="openstack/nova-api-9f3a-account-create-thgql" Dec 01 15:05:22 crc kubenswrapper[4637]: I1201 15:05:22.528121 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsz72\" (UniqueName: \"kubernetes.io/projected/883863aa-96fa-4ca9-b354-31239ab536cc-kube-api-access-nsz72\") pod \"nova-cell0-4785-account-create-88s8h\" (UID: \"883863aa-96fa-4ca9-b354-31239ab536cc\") " pod="openstack/nova-cell0-4785-account-create-88s8h" Dec 01 15:05:22 crc kubenswrapper[4637]: I1201 15:05:22.548690 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0d20541-3861-4d08-89d8-f62a23d65bb9","Type":"ContainerStarted","Data":"31aff1cf258bc9d0e2bbb5c6cb4b4fab8ccb978413181d64a2e7c812bc2a6280"} Dec 01 15:05:22 crc 
kubenswrapper[4637]: I1201 15:05:22.577112 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9f3a-account-create-thgql" Dec 01 15:05:22 crc kubenswrapper[4637]: I1201 15:05:22.582492 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-53ce-account-create-v7kbp"] Dec 01 15:05:22 crc kubenswrapper[4637]: I1201 15:05:22.604999 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-53ce-account-create-v7kbp" Dec 01 15:05:22 crc kubenswrapper[4637]: I1201 15:05:22.609392 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-53ce-account-create-v7kbp"] Dec 01 15:05:22 crc kubenswrapper[4637]: I1201 15:05:22.609621 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 01 15:05:22 crc kubenswrapper[4637]: I1201 15:05:22.629475 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsz72\" (UniqueName: \"kubernetes.io/projected/883863aa-96fa-4ca9-b354-31239ab536cc-kube-api-access-nsz72\") pod \"nova-cell0-4785-account-create-88s8h\" (UID: \"883863aa-96fa-4ca9-b354-31239ab536cc\") " pod="openstack/nova-cell0-4785-account-create-88s8h" Dec 01 15:05:22 crc kubenswrapper[4637]: I1201 15:05:22.658696 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsz72\" (UniqueName: \"kubernetes.io/projected/883863aa-96fa-4ca9-b354-31239ab536cc-kube-api-access-nsz72\") pod \"nova-cell0-4785-account-create-88s8h\" (UID: \"883863aa-96fa-4ca9-b354-31239ab536cc\") " pod="openstack/nova-cell0-4785-account-create-88s8h" Dec 01 15:05:22 crc kubenswrapper[4637]: I1201 15:05:22.686400 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 01 15:05:22 crc kubenswrapper[4637]: W1201 15:05:22.692216 4637 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a976758_2d0a_43be_ad2f_69b00b2fec4a.slice/crio-e7da8fedab947775e04c1634648c65be06b2e775d4131b301bda473d08f41eac WatchSource:0}: Error finding container e7da8fedab947775e04c1634648c65be06b2e775d4131b301bda473d08f41eac: Status 404 returned error can't find the container with id e7da8fedab947775e04c1634648c65be06b2e775d4131b301bda473d08f41eac Dec 01 15:05:22 crc kubenswrapper[4637]: I1201 15:05:22.731309 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb9c7\" (UniqueName: \"kubernetes.io/projected/152d8240-ac76-4db8-b611-e1a3e62a91c6-kube-api-access-jb9c7\") pod \"nova-cell1-53ce-account-create-v7kbp\" (UID: \"152d8240-ac76-4db8-b611-e1a3e62a91c6\") " pod="openstack/nova-cell1-53ce-account-create-v7kbp" Dec 01 15:05:22 crc kubenswrapper[4637]: I1201 15:05:22.797600 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4785-account-create-88s8h" Dec 01 15:05:22 crc kubenswrapper[4637]: I1201 15:05:22.835067 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb9c7\" (UniqueName: \"kubernetes.io/projected/152d8240-ac76-4db8-b611-e1a3e62a91c6-kube-api-access-jb9c7\") pod \"nova-cell1-53ce-account-create-v7kbp\" (UID: \"152d8240-ac76-4db8-b611-e1a3e62a91c6\") " pod="openstack/nova-cell1-53ce-account-create-v7kbp" Dec 01 15:05:22 crc kubenswrapper[4637]: I1201 15:05:22.867137 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb9c7\" (UniqueName: \"kubernetes.io/projected/152d8240-ac76-4db8-b611-e1a3e62a91c6-kube-api-access-jb9c7\") pod \"nova-cell1-53ce-account-create-v7kbp\" (UID: \"152d8240-ac76-4db8-b611-e1a3e62a91c6\") " pod="openstack/nova-cell1-53ce-account-create-v7kbp" Dec 01 15:05:22 crc kubenswrapper[4637]: I1201 15:05:22.940021 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-53ce-account-create-v7kbp" Dec 01 15:05:23 crc kubenswrapper[4637]: I1201 15:05:23.240176 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9f3a-account-create-thgql"] Dec 01 15:05:23 crc kubenswrapper[4637]: I1201 15:05:23.492693 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-4785-account-create-88s8h"] Dec 01 15:05:23 crc kubenswrapper[4637]: I1201 15:05:23.659169 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0d20541-3861-4d08-89d8-f62a23d65bb9","Type":"ContainerStarted","Data":"248f5453c29fe0b943bd71b08fdc80d643302a3ef0fb4172b6eed343093a4efa"} Dec 01 15:05:23 crc kubenswrapper[4637]: I1201 15:05:23.677174 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9f3a-account-create-thgql" event={"ID":"d51c2122-e195-4811-b65a-677e548f80ea","Type":"ContainerStarted","Data":"af47c644a45beb5f7341c7e3420f0a5e01048a012365eeab1447b44c430d999f"} Dec 01 15:05:23 crc kubenswrapper[4637]: I1201 15:05:23.680753 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8a976758-2d0a-43be-ad2f-69b00b2fec4a","Type":"ContainerStarted","Data":"e7da8fedab947775e04c1634648c65be06b2e775d4131b301bda473d08f41eac"} Dec 01 15:05:23 crc kubenswrapper[4637]: I1201 15:05:23.689770 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4785-account-create-88s8h" event={"ID":"883863aa-96fa-4ca9-b354-31239ab536cc","Type":"ContainerStarted","Data":"77797b871fbeecdd899a7d9695197d2bc0f1573ca965d83e7a0753f07a9a499a"} Dec 01 15:05:23 crc kubenswrapper[4637]: I1201 15:05:23.804293 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-9f3a-account-create-thgql" podStartSLOduration=1.804264183 podStartE2EDuration="1.804264183s" podCreationTimestamp="2025-12-01 15:05:22 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:05:23.707435663 +0000 UTC m=+1174.225144491" watchObservedRunningTime="2025-12-01 15:05:23.804264183 +0000 UTC m=+1174.321973011" Dec 01 15:05:23 crc kubenswrapper[4637]: I1201 15:05:23.809359 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-53ce-account-create-v7kbp"] Dec 01 15:05:24 crc kubenswrapper[4637]: I1201 15:05:24.739066 4637 generic.go:334] "Generic (PLEG): container finished" podID="d51c2122-e195-4811-b65a-677e548f80ea" containerID="bac74301707959187b020c731449d47d76995b9b889bf09b7d75d86b7d79eb64" exitCode=0 Dec 01 15:05:24 crc kubenswrapper[4637]: I1201 15:05:24.739550 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9f3a-account-create-thgql" event={"ID":"d51c2122-e195-4811-b65a-677e548f80ea","Type":"ContainerDied","Data":"bac74301707959187b020c731449d47d76995b9b889bf09b7d75d86b7d79eb64"} Dec 01 15:05:24 crc kubenswrapper[4637]: I1201 15:05:24.741787 4637 generic.go:334] "Generic (PLEG): container finished" podID="152d8240-ac76-4db8-b611-e1a3e62a91c6" containerID="358bb6428ff64f29600393815078fca71afd2d0d5037d1c704ece89828dacea1" exitCode=0 Dec 01 15:05:24 crc kubenswrapper[4637]: I1201 15:05:24.741839 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-53ce-account-create-v7kbp" event={"ID":"152d8240-ac76-4db8-b611-e1a3e62a91c6","Type":"ContainerDied","Data":"358bb6428ff64f29600393815078fca71afd2d0d5037d1c704ece89828dacea1"} Dec 01 15:05:24 crc kubenswrapper[4637]: I1201 15:05:24.741859 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-53ce-account-create-v7kbp" event={"ID":"152d8240-ac76-4db8-b611-e1a3e62a91c6","Type":"ContainerStarted","Data":"8a8b4a8a319afec5656b9c7eb92dcee3143a88c5823afb11c0b86c12b085a24e"} Dec 01 15:05:24 crc kubenswrapper[4637]: I1201 15:05:24.744161 4637 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8a976758-2d0a-43be-ad2f-69b00b2fec4a","Type":"ContainerStarted","Data":"7f7b9002402dd9ae7f185f13ca04811ee2fa6de9688dbeaeea809e8badebd6e6"} Dec 01 15:05:24 crc kubenswrapper[4637]: I1201 15:05:24.745487 4637 generic.go:334] "Generic (PLEG): container finished" podID="883863aa-96fa-4ca9-b354-31239ab536cc" containerID="dd22e78fc9b73287b13f9364a578a07bf6ab507a5f3023e5474689f85cc9ce2f" exitCode=0 Dec 01 15:05:24 crc kubenswrapper[4637]: I1201 15:05:24.745542 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4785-account-create-88s8h" event={"ID":"883863aa-96fa-4ca9-b354-31239ab536cc","Type":"ContainerDied","Data":"dd22e78fc9b73287b13f9364a578a07bf6ab507a5f3023e5474689f85cc9ce2f"} Dec 01 15:05:24 crc kubenswrapper[4637]: I1201 15:05:24.748385 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0d20541-3861-4d08-89d8-f62a23d65bb9","Type":"ContainerStarted","Data":"e17dc1bf904f535d5a5d8e9dc8cad8539bb5daf54f25faa505bf158bf7f6d14d"} Dec 01 15:05:24 crc kubenswrapper[4637]: I1201 15:05:24.751143 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 01 15:05:24 crc kubenswrapper[4637]: I1201 15:05:24.852984 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.82068975 podStartE2EDuration="6.852906232s" podCreationTimestamp="2025-12-01 15:05:18 +0000 UTC" firstStartedPulling="2025-12-01 15:05:20.008551772 +0000 UTC m=+1170.526260600" lastFinishedPulling="2025-12-01 15:05:24.040768254 +0000 UTC m=+1174.558477082" observedRunningTime="2025-12-01 15:05:24.839396826 +0000 UTC m=+1175.357105654" watchObservedRunningTime="2025-12-01 15:05:24.852906232 +0000 UTC m=+1175.370615060" Dec 01 15:05:25 crc kubenswrapper[4637]: I1201 15:05:25.758789 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"8a976758-2d0a-43be-ad2f-69b00b2fec4a","Type":"ContainerStarted","Data":"9d13302d5d120648b3cca497706752ae6f68198d98b77163f9b0e00982a7b12e"} Dec 01 15:05:25 crc kubenswrapper[4637]: I1201 15:05:25.786522 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.786505427 podStartE2EDuration="4.786505427s" podCreationTimestamp="2025-12-01 15:05:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:05:25.785012837 +0000 UTC m=+1176.302721665" watchObservedRunningTime="2025-12-01 15:05:25.786505427 +0000 UTC m=+1176.304214255" Dec 01 15:05:25 crc kubenswrapper[4637]: I1201 15:05:25.992324 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.074445 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-6h6t8"] Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.077175 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" podUID="1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d" containerName="dnsmasq-dns" containerID="cri-o://a92e2029a49e7e114e974ec936f388a4170b162d93183163203dd409162da6f2" gracePeriod=10 Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.289142 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.324732 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9f3a-account-create-thgql" Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.373386 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.427372 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m52mt\" (UniqueName: \"kubernetes.io/projected/d51c2122-e195-4811-b65a-677e548f80ea-kube-api-access-m52mt\") pod \"d51c2122-e195-4811-b65a-677e548f80ea\" (UID: \"d51c2122-e195-4811-b65a-677e548f80ea\") " Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.446687 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d51c2122-e195-4811-b65a-677e548f80ea-kube-api-access-m52mt" (OuterVolumeSpecName: "kube-api-access-m52mt") pod "d51c2122-e195-4811-b65a-677e548f80ea" (UID: "d51c2122-e195-4811-b65a-677e548f80ea"). InnerVolumeSpecName "kube-api-access-m52mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.533252 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m52mt\" (UniqueName: \"kubernetes.io/projected/d51c2122-e195-4811-b65a-677e548f80ea-kube-api-access-m52mt\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.624297 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-53ce-account-create-v7kbp" Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.681657 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-4785-account-create-88s8h" Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.740480 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsz72\" (UniqueName: \"kubernetes.io/projected/883863aa-96fa-4ca9-b354-31239ab536cc-kube-api-access-nsz72\") pod \"883863aa-96fa-4ca9-b354-31239ab536cc\" (UID: \"883863aa-96fa-4ca9-b354-31239ab536cc\") " Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.742311 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb9c7\" (UniqueName: \"kubernetes.io/projected/152d8240-ac76-4db8-b611-e1a3e62a91c6-kube-api-access-jb9c7\") pod \"152d8240-ac76-4db8-b611-e1a3e62a91c6\" (UID: \"152d8240-ac76-4db8-b611-e1a3e62a91c6\") " Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.753668 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/152d8240-ac76-4db8-b611-e1a3e62a91c6-kube-api-access-jb9c7" (OuterVolumeSpecName: "kube-api-access-jb9c7") pod "152d8240-ac76-4db8-b611-e1a3e62a91c6" (UID: "152d8240-ac76-4db8-b611-e1a3e62a91c6"). InnerVolumeSpecName "kube-api-access-jb9c7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.762071 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/883863aa-96fa-4ca9-b354-31239ab536cc-kube-api-access-nsz72" (OuterVolumeSpecName: "kube-api-access-nsz72") pod "883863aa-96fa-4ca9-b354-31239ab536cc" (UID: "883863aa-96fa-4ca9-b354-31239ab536cc"). InnerVolumeSpecName "kube-api-access-nsz72". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.783402 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.789645 4637 generic.go:334] "Generic (PLEG): container finished" podID="1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d" containerID="a92e2029a49e7e114e974ec936f388a4170b162d93183163203dd409162da6f2" exitCode=0 Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.789709 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" event={"ID":"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d","Type":"ContainerDied","Data":"a92e2029a49e7e114e974ec936f388a4170b162d93183163203dd409162da6f2"} Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.789743 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" event={"ID":"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d","Type":"ContainerDied","Data":"93d768b09977873400f490da25d586231d0be2e11ee57947190ef41f4ebbaeb1"} Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.789761 4637 scope.go:117] "RemoveContainer" containerID="a92e2029a49e7e114e974ec936f388a4170b162d93183163203dd409162da6f2" Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.789912 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-6h6t8" Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.813404 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-53ce-account-create-v7kbp" event={"ID":"152d8240-ac76-4db8-b611-e1a3e62a91c6","Type":"ContainerDied","Data":"8a8b4a8a319afec5656b9c7eb92dcee3143a88c5823afb11c0b86c12b085a24e"} Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.813448 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a8b4a8a319afec5656b9c7eb92dcee3143a88c5823afb11c0b86c12b085a24e" Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.813522 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-53ce-account-create-v7kbp" Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.838963 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4785-account-create-88s8h" event={"ID":"883863aa-96fa-4ca9-b354-31239ab536cc","Type":"ContainerDied","Data":"77797b871fbeecdd899a7d9695197d2bc0f1573ca965d83e7a0753f07a9a499a"} Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.839020 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77797b871fbeecdd899a7d9695197d2bc0f1573ca965d83e7a0753f07a9a499a" Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.839128 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4785-account-create-88s8h" Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.878798 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9f3a-account-create-thgql" Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.878892 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9f3a-account-create-thgql" event={"ID":"d51c2122-e195-4811-b65a-677e548f80ea","Type":"ContainerDied","Data":"af47c644a45beb5f7341c7e3420f0a5e01048a012365eeab1447b44c430d999f"} Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.878915 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af47c644a45beb5f7341c7e3420f0a5e01048a012365eeab1447b44c430d999f" Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.879260 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.879477 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1a0b3365-7330-4f1b-a63d-f814cdc5ac62" containerName="cinder-scheduler" 
containerID="cri-o://126d6d4b452460a27f05d362cdeb9d99a809ab298964b0e7d5c438f9873d5613" gracePeriod=30 Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.880163 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1a0b3365-7330-4f1b-a63d-f814cdc5ac62" containerName="probe" containerID="cri-o://3401204fbd1b19d46166e24e49ea04ab97befb9cbed0ec07bf31d03a157ce419" gracePeriod=30 Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.894221 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb9c7\" (UniqueName: \"kubernetes.io/projected/152d8240-ac76-4db8-b611-e1a3e62a91c6-kube-api-access-jb9c7\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.894783 4637 scope.go:117] "RemoveContainer" containerID="260c9841fa3ccd6c05ebfb0503091a1972efbd0841f35e026bcb259651989775" Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.898509 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsz72\" (UniqueName: \"kubernetes.io/projected/883863aa-96fa-4ca9-b354-31239ab536cc-kube-api-access-nsz72\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:26 crc kubenswrapper[4637]: I1201 15:05:26.997138 4637 scope.go:117] "RemoveContainer" containerID="a92e2029a49e7e114e974ec936f388a4170b162d93183163203dd409162da6f2" Dec 01 15:05:27 crc kubenswrapper[4637]: E1201 15:05:27.008105 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a92e2029a49e7e114e974ec936f388a4170b162d93183163203dd409162da6f2\": container with ID starting with a92e2029a49e7e114e974ec936f388a4170b162d93183163203dd409162da6f2 not found: ID does not exist" containerID="a92e2029a49e7e114e974ec936f388a4170b162d93183163203dd409162da6f2" Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.008166 4637 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a92e2029a49e7e114e974ec936f388a4170b162d93183163203dd409162da6f2"} err="failed to get container status \"a92e2029a49e7e114e974ec936f388a4170b162d93183163203dd409162da6f2\": rpc error: code = NotFound desc = could not find container \"a92e2029a49e7e114e974ec936f388a4170b162d93183163203dd409162da6f2\": container with ID starting with a92e2029a49e7e114e974ec936f388a4170b162d93183163203dd409162da6f2 not found: ID does not exist" Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.008194 4637 scope.go:117] "RemoveContainer" containerID="260c9841fa3ccd6c05ebfb0503091a1972efbd0841f35e026bcb259651989775" Dec 01 15:05:27 crc kubenswrapper[4637]: E1201 15:05:27.010173 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"260c9841fa3ccd6c05ebfb0503091a1972efbd0841f35e026bcb259651989775\": container with ID starting with 260c9841fa3ccd6c05ebfb0503091a1972efbd0841f35e026bcb259651989775 not found: ID does not exist" containerID="260c9841fa3ccd6c05ebfb0503091a1972efbd0841f35e026bcb259651989775" Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.010224 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"260c9841fa3ccd6c05ebfb0503091a1972efbd0841f35e026bcb259651989775"} err="failed to get container status \"260c9841fa3ccd6c05ebfb0503091a1972efbd0841f35e026bcb259651989775\": rpc error: code = NotFound desc = could not find container \"260c9841fa3ccd6c05ebfb0503091a1972efbd0841f35e026bcb259651989775\": container with ID starting with 260c9841fa3ccd6c05ebfb0503091a1972efbd0841f35e026bcb259651989775 not found: ID does not exist" Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.011049 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-config\") pod \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\" (UID: 
\"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\") " Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.011126 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78bkv\" (UniqueName: \"kubernetes.io/projected/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-kube-api-access-78bkv\") pod \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\" (UID: \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\") " Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.011194 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-dns-swift-storage-0\") pod \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\" (UID: \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\") " Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.011361 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-ovsdbserver-nb\") pod \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\" (UID: \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\") " Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.011498 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-dns-svc\") pod \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\" (UID: \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\") " Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.011545 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-ovsdbserver-sb\") pod \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\" (UID: \"1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d\") " Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.036163 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-kube-api-access-78bkv" (OuterVolumeSpecName: "kube-api-access-78bkv") pod "1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d" (UID: "1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d"). InnerVolumeSpecName "kube-api-access-78bkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.117142 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78bkv\" (UniqueName: \"kubernetes.io/projected/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-kube-api-access-78bkv\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.124990 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d" (UID: "1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.129538 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d" (UID: "1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.146640 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d" (UID: "1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.151379 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d" (UID: "1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.158311 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-config" (OuterVolumeSpecName: "config") pod "1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d" (UID: "1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.219465 4637 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.219511 4637 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.219523 4637 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.219533 4637 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:27 crc 
kubenswrapper[4637]: I1201 15:05:27.219544 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d-config\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.427870 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-6h6t8"] Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.441998 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-6h6t8"] Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.782011 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d" path="/var/lib/kubelet/pods/1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d/volumes" Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.893302 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"421d907a-c7b0-4109-8d01-e725459215b9","Type":"ContainerStarted","Data":"d1a45ce769f3eb336a435b02c0a51d0221bd7f91a59708c8ffcb3d22f73e849d"} Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.925406 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=5.840826234 podStartE2EDuration="41.925374981s" podCreationTimestamp="2025-12-01 15:04:46 +0000 UTC" firstStartedPulling="2025-12-01 15:04:50.32308284 +0000 UTC m=+1140.840791668" lastFinishedPulling="2025-12-01 15:05:26.407631587 +0000 UTC m=+1176.925340415" observedRunningTime="2025-12-01 15:05:27.918833363 +0000 UTC m=+1178.436542191" watchObservedRunningTime="2025-12-01 15:05:27.925374981 +0000 UTC m=+1178.443083809" Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.970123 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.970382 4637 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/ceilometer-0" podUID="c0d20541-3861-4d08-89d8-f62a23d65bb9" containerName="ceilometer-central-agent" containerID="cri-o://acf800f5e1529efe64418ce0978897bb7f78de9c38a5771ebc1db0668c6f9d3b" gracePeriod=30 Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.970592 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0d20541-3861-4d08-89d8-f62a23d65bb9" containerName="proxy-httpd" containerID="cri-o://e17dc1bf904f535d5a5d8e9dc8cad8539bb5daf54f25faa505bf158bf7f6d14d" gracePeriod=30 Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.970756 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0d20541-3861-4d08-89d8-f62a23d65bb9" containerName="sg-core" containerID="cri-o://248f5453c29fe0b943bd71b08fdc80d643302a3ef0fb4172b6eed343093a4efa" gracePeriod=30 Dec 01 15:05:27 crc kubenswrapper[4637]: I1201 15:05:27.970810 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0d20541-3861-4d08-89d8-f62a23d65bb9" containerName="ceilometer-notification-agent" containerID="cri-o://31aff1cf258bc9d0e2bbb5c6cb4b4fab8ccb978413181d64a2e7c812bc2a6280" gracePeriod=30 Dec 01 15:05:28 crc kubenswrapper[4637]: I1201 15:05:28.904529 4637 generic.go:334] "Generic (PLEG): container finished" podID="1a0b3365-7330-4f1b-a63d-f814cdc5ac62" containerID="3401204fbd1b19d46166e24e49ea04ab97befb9cbed0ec07bf31d03a157ce419" exitCode=0 Dec 01 15:05:28 crc kubenswrapper[4637]: I1201 15:05:28.904797 4637 generic.go:334] "Generic (PLEG): container finished" podID="1a0b3365-7330-4f1b-a63d-f814cdc5ac62" containerID="126d6d4b452460a27f05d362cdeb9d99a809ab298964b0e7d5c438f9873d5613" exitCode=0 Dec 01 15:05:28 crc kubenswrapper[4637]: I1201 15:05:28.904836 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"1a0b3365-7330-4f1b-a63d-f814cdc5ac62","Type":"ContainerDied","Data":"3401204fbd1b19d46166e24e49ea04ab97befb9cbed0ec07bf31d03a157ce419"} Dec 01 15:05:28 crc kubenswrapper[4637]: I1201 15:05:28.904867 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1a0b3365-7330-4f1b-a63d-f814cdc5ac62","Type":"ContainerDied","Data":"126d6d4b452460a27f05d362cdeb9d99a809ab298964b0e7d5c438f9873d5613"} Dec 01 15:05:28 crc kubenswrapper[4637]: I1201 15:05:28.908721 4637 generic.go:334] "Generic (PLEG): container finished" podID="c0d20541-3861-4d08-89d8-f62a23d65bb9" containerID="e17dc1bf904f535d5a5d8e9dc8cad8539bb5daf54f25faa505bf158bf7f6d14d" exitCode=0 Dec 01 15:05:28 crc kubenswrapper[4637]: I1201 15:05:28.908745 4637 generic.go:334] "Generic (PLEG): container finished" podID="c0d20541-3861-4d08-89d8-f62a23d65bb9" containerID="248f5453c29fe0b943bd71b08fdc80d643302a3ef0fb4172b6eed343093a4efa" exitCode=2 Dec 01 15:05:28 crc kubenswrapper[4637]: I1201 15:05:28.908755 4637 generic.go:334] "Generic (PLEG): container finished" podID="c0d20541-3861-4d08-89d8-f62a23d65bb9" containerID="31aff1cf258bc9d0e2bbb5c6cb4b4fab8ccb978413181d64a2e7c812bc2a6280" exitCode=0 Dec 01 15:05:28 crc kubenswrapper[4637]: I1201 15:05:28.908773 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0d20541-3861-4d08-89d8-f62a23d65bb9","Type":"ContainerDied","Data":"e17dc1bf904f535d5a5d8e9dc8cad8539bb5daf54f25faa505bf158bf7f6d14d"} Dec 01 15:05:28 crc kubenswrapper[4637]: I1201 15:05:28.908810 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0d20541-3861-4d08-89d8-f62a23d65bb9","Type":"ContainerDied","Data":"248f5453c29fe0b943bd71b08fdc80d643302a3ef0fb4172b6eed343093a4efa"} Dec 01 15:05:28 crc kubenswrapper[4637]: I1201 15:05:28.908822 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c0d20541-3861-4d08-89d8-f62a23d65bb9","Type":"ContainerDied","Data":"31aff1cf258bc9d0e2bbb5c6cb4b4fab8ccb978413181d64a2e7c812bc2a6280"} Dec 01 15:05:29 crc kubenswrapper[4637]: I1201 15:05:29.492612 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 01 15:05:29 crc kubenswrapper[4637]: I1201 15:05:29.565949 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-combined-ca-bundle\") pod \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\" (UID: \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\") " Dec 01 15:05:29 crc kubenswrapper[4637]: I1201 15:05:29.566298 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-etc-machine-id\") pod \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\" (UID: \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\") " Dec 01 15:05:29 crc kubenswrapper[4637]: I1201 15:05:29.566342 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-config-data\") pod \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\" (UID: \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\") " Dec 01 15:05:29 crc kubenswrapper[4637]: I1201 15:05:29.566428 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1a0b3365-7330-4f1b-a63d-f814cdc5ac62" (UID: "1a0b3365-7330-4f1b-a63d-f814cdc5ac62"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 15:05:29 crc kubenswrapper[4637]: I1201 15:05:29.566466 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-scripts\") pod \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\" (UID: \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\") " Dec 01 15:05:29 crc kubenswrapper[4637]: I1201 15:05:29.566493 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-config-data-custom\") pod \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\" (UID: \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\") " Dec 01 15:05:29 crc kubenswrapper[4637]: I1201 15:05:29.566529 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vbrb\" (UniqueName: \"kubernetes.io/projected/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-kube-api-access-2vbrb\") pod \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\" (UID: \"1a0b3365-7330-4f1b-a63d-f814cdc5ac62\") " Dec 01 15:05:29 crc kubenswrapper[4637]: I1201 15:05:29.566996 4637 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:29 crc kubenswrapper[4637]: I1201 15:05:29.574051 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-scripts" (OuterVolumeSpecName: "scripts") pod "1a0b3365-7330-4f1b-a63d-f814cdc5ac62" (UID: "1a0b3365-7330-4f1b-a63d-f814cdc5ac62"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:29 crc kubenswrapper[4637]: I1201 15:05:29.575267 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1a0b3365-7330-4f1b-a63d-f814cdc5ac62" (UID: "1a0b3365-7330-4f1b-a63d-f814cdc5ac62"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:29 crc kubenswrapper[4637]: I1201 15:05:29.606114 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-kube-api-access-2vbrb" (OuterVolumeSpecName: "kube-api-access-2vbrb") pod "1a0b3365-7330-4f1b-a63d-f814cdc5ac62" (UID: "1a0b3365-7330-4f1b-a63d-f814cdc5ac62"). InnerVolumeSpecName "kube-api-access-2vbrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:05:29 crc kubenswrapper[4637]: I1201 15:05:29.668999 4637 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:29 crc kubenswrapper[4637]: I1201 15:05:29.669031 4637 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:29 crc kubenswrapper[4637]: I1201 15:05:29.669042 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vbrb\" (UniqueName: \"kubernetes.io/projected/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-kube-api-access-2vbrb\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:29 crc kubenswrapper[4637]: I1201 15:05:29.720051 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-config-data" 
(OuterVolumeSpecName: "config-data") pod "1a0b3365-7330-4f1b-a63d-f814cdc5ac62" (UID: "1a0b3365-7330-4f1b-a63d-f814cdc5ac62"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:29 crc kubenswrapper[4637]: I1201 15:05:29.753190 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a0b3365-7330-4f1b-a63d-f814cdc5ac62" (UID: "1a0b3365-7330-4f1b-a63d-f814cdc5ac62"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:29 crc kubenswrapper[4637]: I1201 15:05:29.770403 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:29 crc kubenswrapper[4637]: I1201 15:05:29.770434 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a0b3365-7330-4f1b-a63d-f814cdc5ac62-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:29 crc kubenswrapper[4637]: I1201 15:05:29.955195 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1a0b3365-7330-4f1b-a63d-f814cdc5ac62","Type":"ContainerDied","Data":"428eeb7741a9a3427473ceec4d856e65694673ad652f3d21e1c6e80d41f299f6"} Dec 01 15:05:29 crc kubenswrapper[4637]: I1201 15:05:29.955315 4637 scope.go:117] "RemoveContainer" containerID="3401204fbd1b19d46166e24e49ea04ab97befb9cbed0ec07bf31d03a157ce419" Dec 01 15:05:29 crc kubenswrapper[4637]: I1201 15:05:29.955596 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.000879 4637 scope.go:117] "RemoveContainer" containerID="126d6d4b452460a27f05d362cdeb9d99a809ab298964b0e7d5c438f9873d5613" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.016626 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.030119 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.042325 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 15:05:30 crc kubenswrapper[4637]: E1201 15:05:30.042775 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883863aa-96fa-4ca9-b354-31239ab536cc" containerName="mariadb-account-create" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.042794 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="883863aa-96fa-4ca9-b354-31239ab536cc" containerName="mariadb-account-create" Dec 01 15:05:30 crc kubenswrapper[4637]: E1201 15:05:30.042809 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="152d8240-ac76-4db8-b611-e1a3e62a91c6" containerName="mariadb-account-create" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.042815 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="152d8240-ac76-4db8-b611-e1a3e62a91c6" containerName="mariadb-account-create" Dec 01 15:05:30 crc kubenswrapper[4637]: E1201 15:05:30.042841 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a0b3365-7330-4f1b-a63d-f814cdc5ac62" containerName="cinder-scheduler" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.042850 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a0b3365-7330-4f1b-a63d-f814cdc5ac62" containerName="cinder-scheduler" Dec 01 15:05:30 crc kubenswrapper[4637]: E1201 15:05:30.042859 4637 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a0b3365-7330-4f1b-a63d-f814cdc5ac62" containerName="probe" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.042865 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a0b3365-7330-4f1b-a63d-f814cdc5ac62" containerName="probe" Dec 01 15:05:30 crc kubenswrapper[4637]: E1201 15:05:30.042873 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d51c2122-e195-4811-b65a-677e548f80ea" containerName="mariadb-account-create" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.042879 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="d51c2122-e195-4811-b65a-677e548f80ea" containerName="mariadb-account-create" Dec 01 15:05:30 crc kubenswrapper[4637]: E1201 15:05:30.042894 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d" containerName="dnsmasq-dns" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.042900 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d" containerName="dnsmasq-dns" Dec 01 15:05:30 crc kubenswrapper[4637]: E1201 15:05:30.042917 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d" containerName="init" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.042941 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d" containerName="init" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.043095 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a0b3365-7330-4f1b-a63d-f814cdc5ac62" containerName="probe" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.043109 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bda0b2e-3c55-4e3c-8ad1-1f8b631a2b3d" containerName="dnsmasq-dns" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.043118 4637 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="152d8240-ac76-4db8-b611-e1a3e62a91c6" containerName="mariadb-account-create" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.043150 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="883863aa-96fa-4ca9-b354-31239ab536cc" containerName="mariadb-account-create" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.043163 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="d51c2122-e195-4811-b65a-677e548f80ea" containerName="mariadb-account-create" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.043174 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a0b3365-7330-4f1b-a63d-f814cdc5ac62" containerName="cinder-scheduler" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.044222 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.046364 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.046520 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.077180 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hnlk\" (UniqueName: \"kubernetes.io/projected/a03f4cde-ebc1-46dd-9218-b3f073602fba-kube-api-access-8hnlk\") pod \"cinder-scheduler-0\" (UID: \"a03f4cde-ebc1-46dd-9218-b3f073602fba\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.077292 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a03f4cde-ebc1-46dd-9218-b3f073602fba-config-data\") pod \"cinder-scheduler-0\" (UID: \"a03f4cde-ebc1-46dd-9218-b3f073602fba\") " pod="openstack/cinder-scheduler-0" 
Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.077379 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a03f4cde-ebc1-46dd-9218-b3f073602fba-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a03f4cde-ebc1-46dd-9218-b3f073602fba\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.077400 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a03f4cde-ebc1-46dd-9218-b3f073602fba-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a03f4cde-ebc1-46dd-9218-b3f073602fba\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.077472 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a03f4cde-ebc1-46dd-9218-b3f073602fba-scripts\") pod \"cinder-scheduler-0\" (UID: \"a03f4cde-ebc1-46dd-9218-b3f073602fba\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.077498 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a03f4cde-ebc1-46dd-9218-b3f073602fba-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a03f4cde-ebc1-46dd-9218-b3f073602fba\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.180000 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a03f4cde-ebc1-46dd-9218-b3f073602fba-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a03f4cde-ebc1-46dd-9218-b3f073602fba\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.180066 4637 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a03f4cde-ebc1-46dd-9218-b3f073602fba-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a03f4cde-ebc1-46dd-9218-b3f073602fba\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.180174 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a03f4cde-ebc1-46dd-9218-b3f073602fba-scripts\") pod \"cinder-scheduler-0\" (UID: \"a03f4cde-ebc1-46dd-9218-b3f073602fba\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.180214 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a03f4cde-ebc1-46dd-9218-b3f073602fba-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a03f4cde-ebc1-46dd-9218-b3f073602fba\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.180315 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hnlk\" (UniqueName: \"kubernetes.io/projected/a03f4cde-ebc1-46dd-9218-b3f073602fba-kube-api-access-8hnlk\") pod \"cinder-scheduler-0\" (UID: \"a03f4cde-ebc1-46dd-9218-b3f073602fba\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.180433 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a03f4cde-ebc1-46dd-9218-b3f073602fba-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a03f4cde-ebc1-46dd-9218-b3f073602fba\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.180981 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a03f4cde-ebc1-46dd-9218-b3f073602fba-config-data\") pod \"cinder-scheduler-0\" (UID: \"a03f4cde-ebc1-46dd-9218-b3f073602fba\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.185692 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a03f4cde-ebc1-46dd-9218-b3f073602fba-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a03f4cde-ebc1-46dd-9218-b3f073602fba\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.185912 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a03f4cde-ebc1-46dd-9218-b3f073602fba-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a03f4cde-ebc1-46dd-9218-b3f073602fba\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.186923 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a03f4cde-ebc1-46dd-9218-b3f073602fba-config-data\") pod \"cinder-scheduler-0\" (UID: \"a03f4cde-ebc1-46dd-9218-b3f073602fba\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.188446 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a03f4cde-ebc1-46dd-9218-b3f073602fba-scripts\") pod \"cinder-scheduler-0\" (UID: \"a03f4cde-ebc1-46dd-9218-b3f073602fba\") " pod="openstack/cinder-scheduler-0" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.201694 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hnlk\" (UniqueName: \"kubernetes.io/projected/a03f4cde-ebc1-46dd-9218-b3f073602fba-kube-api-access-8hnlk\") pod \"cinder-scheduler-0\" (UID: \"a03f4cde-ebc1-46dd-9218-b3f073602fba\") " pod="openstack/cinder-scheduler-0" Dec 01 
15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.367191 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.904826 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 15:05:30 crc kubenswrapper[4637]: I1201 15:05:30.974391 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a03f4cde-ebc1-46dd-9218-b3f073602fba","Type":"ContainerStarted","Data":"978388ed02c8b860cade9e11aa6d93c01b3546c8dafba056b2cac56751fb8641"} Dec 01 15:05:31 crc kubenswrapper[4637]: I1201 15:05:31.789457 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a0b3365-7330-4f1b-a63d-f814cdc5ac62" path="/var/lib/kubelet/pods/1a0b3365-7330-4f1b-a63d-f814cdc5ac62/volumes" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.020321 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a03f4cde-ebc1-46dd-9218-b3f073602fba","Type":"ContainerStarted","Data":"44bf8cfb2e2bb8b09306790fc94e17a8bf01677f63c49ad6b79a13691c9136b9"} Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.542371 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.654735 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0d20541-3861-4d08-89d8-f62a23d65bb9-run-httpd\") pod \"c0d20541-3861-4d08-89d8-f62a23d65bb9\" (UID: \"c0d20541-3861-4d08-89d8-f62a23d65bb9\") " Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.654817 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0d20541-3861-4d08-89d8-f62a23d65bb9-combined-ca-bundle\") pod \"c0d20541-3861-4d08-89d8-f62a23d65bb9\" (UID: \"c0d20541-3861-4d08-89d8-f62a23d65bb9\") " Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.654851 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0d20541-3861-4d08-89d8-f62a23d65bb9-scripts\") pod \"c0d20541-3861-4d08-89d8-f62a23d65bb9\" (UID: \"c0d20541-3861-4d08-89d8-f62a23d65bb9\") " Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.654973 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0d20541-3861-4d08-89d8-f62a23d65bb9-log-httpd\") pod \"c0d20541-3861-4d08-89d8-f62a23d65bb9\" (UID: \"c0d20541-3861-4d08-89d8-f62a23d65bb9\") " Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.655068 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvv2t\" (UniqueName: \"kubernetes.io/projected/c0d20541-3861-4d08-89d8-f62a23d65bb9-kube-api-access-tvv2t\") pod \"c0d20541-3861-4d08-89d8-f62a23d65bb9\" (UID: \"c0d20541-3861-4d08-89d8-f62a23d65bb9\") " Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.655241 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/c0d20541-3861-4d08-89d8-f62a23d65bb9-sg-core-conf-yaml\") pod \"c0d20541-3861-4d08-89d8-f62a23d65bb9\" (UID: \"c0d20541-3861-4d08-89d8-f62a23d65bb9\") " Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.655331 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0d20541-3861-4d08-89d8-f62a23d65bb9-config-data\") pod \"c0d20541-3861-4d08-89d8-f62a23d65bb9\" (UID: \"c0d20541-3861-4d08-89d8-f62a23d65bb9\") " Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.656978 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0d20541-3861-4d08-89d8-f62a23d65bb9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c0d20541-3861-4d08-89d8-f62a23d65bb9" (UID: "c0d20541-3861-4d08-89d8-f62a23d65bb9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.657622 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0d20541-3861-4d08-89d8-f62a23d65bb9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c0d20541-3861-4d08-89d8-f62a23d65bb9" (UID: "c0d20541-3861-4d08-89d8-f62a23d65bb9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.684743 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0d20541-3861-4d08-89d8-f62a23d65bb9-kube-api-access-tvv2t" (OuterVolumeSpecName: "kube-api-access-tvv2t") pod "c0d20541-3861-4d08-89d8-f62a23d65bb9" (UID: "c0d20541-3861-4d08-89d8-f62a23d65bb9"). InnerVolumeSpecName "kube-api-access-tvv2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.726224 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ddhlw"] Dec 01 15:05:32 crc kubenswrapper[4637]: E1201 15:05:32.727402 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d20541-3861-4d08-89d8-f62a23d65bb9" containerName="ceilometer-central-agent" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.727424 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d20541-3861-4d08-89d8-f62a23d65bb9" containerName="ceilometer-central-agent" Dec 01 15:05:32 crc kubenswrapper[4637]: E1201 15:05:32.727448 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d20541-3861-4d08-89d8-f62a23d65bb9" containerName="sg-core" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.727456 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d20541-3861-4d08-89d8-f62a23d65bb9" containerName="sg-core" Dec 01 15:05:32 crc kubenswrapper[4637]: E1201 15:05:32.727504 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d20541-3861-4d08-89d8-f62a23d65bb9" containerName="proxy-httpd" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.727512 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d20541-3861-4d08-89d8-f62a23d65bb9" containerName="proxy-httpd" Dec 01 15:05:32 crc kubenswrapper[4637]: E1201 15:05:32.727548 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d20541-3861-4d08-89d8-f62a23d65bb9" containerName="ceilometer-notification-agent" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.727556 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d20541-3861-4d08-89d8-f62a23d65bb9" containerName="ceilometer-notification-agent" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.727852 4637 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c0d20541-3861-4d08-89d8-f62a23d65bb9" containerName="ceilometer-central-agent" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.727894 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0d20541-3861-4d08-89d8-f62a23d65bb9" containerName="sg-core" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.727904 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0d20541-3861-4d08-89d8-f62a23d65bb9" containerName="proxy-httpd" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.727920 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0d20541-3861-4d08-89d8-f62a23d65bb9" containerName="ceilometer-notification-agent" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.739038 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ddhlw" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.743764 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ddhlw"] Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.747018 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-4dqh8" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.747265 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.747426 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.762062 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0d20541-3861-4d08-89d8-f62a23d65bb9-scripts" (OuterVolumeSpecName: "scripts") pod "c0d20541-3861-4d08-89d8-f62a23d65bb9" (UID: "c0d20541-3861-4d08-89d8-f62a23d65bb9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.784549 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/525ed4f2-f6a1-436b-8119-cf3fc620c6a7-config-data\") pod \"nova-cell0-conductor-db-sync-ddhlw\" (UID: \"525ed4f2-f6a1-436b-8119-cf3fc620c6a7\") " pod="openstack/nova-cell0-conductor-db-sync-ddhlw" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.784681 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/525ed4f2-f6a1-436b-8119-cf3fc620c6a7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ddhlw\" (UID: \"525ed4f2-f6a1-436b-8119-cf3fc620c6a7\") " pod="openstack/nova-cell0-conductor-db-sync-ddhlw" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.784716 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/525ed4f2-f6a1-436b-8119-cf3fc620c6a7-scripts\") pod \"nova-cell0-conductor-db-sync-ddhlw\" (UID: \"525ed4f2-f6a1-436b-8119-cf3fc620c6a7\") " pod="openstack/nova-cell0-conductor-db-sync-ddhlw" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.784749 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scb7z\" (UniqueName: \"kubernetes.io/projected/525ed4f2-f6a1-436b-8119-cf3fc620c6a7-kube-api-access-scb7z\") pod \"nova-cell0-conductor-db-sync-ddhlw\" (UID: \"525ed4f2-f6a1-436b-8119-cf3fc620c6a7\") " pod="openstack/nova-cell0-conductor-db-sync-ddhlw" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.784854 4637 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0d20541-3861-4d08-89d8-f62a23d65bb9-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:32 
crc kubenswrapper[4637]: I1201 15:05:32.784867 4637 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0d20541-3861-4d08-89d8-f62a23d65bb9-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.784876 4637 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0d20541-3861-4d08-89d8-f62a23d65bb9-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.784886 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvv2t\" (UniqueName: \"kubernetes.io/projected/c0d20541-3861-4d08-89d8-f62a23d65bb9-kube-api-access-tvv2t\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.830084 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0d20541-3861-4d08-89d8-f62a23d65bb9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c0d20541-3861-4d08-89d8-f62a23d65bb9" (UID: "c0d20541-3861-4d08-89d8-f62a23d65bb9"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.886551 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/525ed4f2-f6a1-436b-8119-cf3fc620c6a7-config-data\") pod \"nova-cell0-conductor-db-sync-ddhlw\" (UID: \"525ed4f2-f6a1-436b-8119-cf3fc620c6a7\") " pod="openstack/nova-cell0-conductor-db-sync-ddhlw" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.886686 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/525ed4f2-f6a1-436b-8119-cf3fc620c6a7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ddhlw\" (UID: \"525ed4f2-f6a1-436b-8119-cf3fc620c6a7\") " pod="openstack/nova-cell0-conductor-db-sync-ddhlw" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.886730 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/525ed4f2-f6a1-436b-8119-cf3fc620c6a7-scripts\") pod \"nova-cell0-conductor-db-sync-ddhlw\" (UID: \"525ed4f2-f6a1-436b-8119-cf3fc620c6a7\") " pod="openstack/nova-cell0-conductor-db-sync-ddhlw" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.886768 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scb7z\" (UniqueName: \"kubernetes.io/projected/525ed4f2-f6a1-436b-8119-cf3fc620c6a7-kube-api-access-scb7z\") pod \"nova-cell0-conductor-db-sync-ddhlw\" (UID: \"525ed4f2-f6a1-436b-8119-cf3fc620c6a7\") " pod="openstack/nova-cell0-conductor-db-sync-ddhlw" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.886899 4637 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0d20541-3861-4d08-89d8-f62a23d65bb9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.900589 4637 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/525ed4f2-f6a1-436b-8119-cf3fc620c6a7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ddhlw\" (UID: \"525ed4f2-f6a1-436b-8119-cf3fc620c6a7\") " pod="openstack/nova-cell0-conductor-db-sync-ddhlw" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.908765 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/525ed4f2-f6a1-436b-8119-cf3fc620c6a7-config-data\") pod \"nova-cell0-conductor-db-sync-ddhlw\" (UID: \"525ed4f2-f6a1-436b-8119-cf3fc620c6a7\") " pod="openstack/nova-cell0-conductor-db-sync-ddhlw" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.912017 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/525ed4f2-f6a1-436b-8119-cf3fc620c6a7-scripts\") pod \"nova-cell0-conductor-db-sync-ddhlw\" (UID: \"525ed4f2-f6a1-436b-8119-cf3fc620c6a7\") " pod="openstack/nova-cell0-conductor-db-sync-ddhlw" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.921036 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scb7z\" (UniqueName: \"kubernetes.io/projected/525ed4f2-f6a1-436b-8119-cf3fc620c6a7-kube-api-access-scb7z\") pod \"nova-cell0-conductor-db-sync-ddhlw\" (UID: \"525ed4f2-f6a1-436b-8119-cf3fc620c6a7\") " pod="openstack/nova-cell0-conductor-db-sync-ddhlw" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.956157 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0d20541-3861-4d08-89d8-f62a23d65bb9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0d20541-3861-4d08-89d8-f62a23d65bb9" (UID: "c0d20541-3861-4d08-89d8-f62a23d65bb9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.981140 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ddhlw" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.985219 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0d20541-3861-4d08-89d8-f62a23d65bb9-config-data" (OuterVolumeSpecName: "config-data") pod "c0d20541-3861-4d08-89d8-f62a23d65bb9" (UID: "c0d20541-3861-4d08-89d8-f62a23d65bb9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.989453 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0d20541-3861-4d08-89d8-f62a23d65bb9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:32 crc kubenswrapper[4637]: I1201 15:05:32.989491 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0d20541-3861-4d08-89d8-f62a23d65bb9-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.055357 4637 generic.go:334] "Generic (PLEG): container finished" podID="c0d20541-3861-4d08-89d8-f62a23d65bb9" containerID="acf800f5e1529efe64418ce0978897bb7f78de9c38a5771ebc1db0668c6f9d3b" exitCode=0 Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.055492 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0d20541-3861-4d08-89d8-f62a23d65bb9","Type":"ContainerDied","Data":"acf800f5e1529efe64418ce0978897bb7f78de9c38a5771ebc1db0668c6f9d3b"} Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.055531 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c0d20541-3861-4d08-89d8-f62a23d65bb9","Type":"ContainerDied","Data":"57f3a48466200a21881efcbab4f0a9d33bb67342ef8592637e325fc569feb743"} Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.055553 4637 scope.go:117] "RemoveContainer" containerID="e17dc1bf904f535d5a5d8e9dc8cad8539bb5daf54f25faa505bf158bf7f6d14d" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.055773 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.086291 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a03f4cde-ebc1-46dd-9218-b3f073602fba","Type":"ContainerStarted","Data":"a6bc5e43a58e584733d544edeceb207a3f1d3e45b322e2fdc05dc660c5be35ca"} Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.119733 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.152015 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.167419 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.170316 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.176014 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.176001254 podStartE2EDuration="4.176001254s" podCreationTimestamp="2025-12-01 15:05:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:05:33.160316399 +0000 UTC m=+1183.678025227" watchObservedRunningTime="2025-12-01 15:05:33.176001254 +0000 UTC m=+1183.693710082" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.180792 4637 scope.go:117] "RemoveContainer" containerID="248f5453c29fe0b943bd71b08fdc80d643302a3ef0fb4172b6eed343093a4efa" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.181439 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.181745 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.241081 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.299195 4637 scope.go:117] "RemoveContainer" containerID="31aff1cf258bc9d0e2bbb5c6cb4b4fab8ccb978413181d64a2e7c812bc2a6280" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.327595 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-scripts\") pod \"ceilometer-0\" (UID: \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\") " pod="openstack/ceilometer-0" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.327661 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\") " pod="openstack/ceilometer-0" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.327760 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-log-httpd\") pod \"ceilometer-0\" (UID: \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\") " pod="openstack/ceilometer-0" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.327783 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\") " pod="openstack/ceilometer-0" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.327822 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-config-data\") pod \"ceilometer-0\" (UID: \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\") " pod="openstack/ceilometer-0" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.327844 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zntp4\" (UniqueName: \"kubernetes.io/projected/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-kube-api-access-zntp4\") pod \"ceilometer-0\" (UID: \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\") " pod="openstack/ceilometer-0" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.327866 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-run-httpd\") pod \"ceilometer-0\" (UID: 
\"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\") " pod="openstack/ceilometer-0" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.352254 4637 scope.go:117] "RemoveContainer" containerID="acf800f5e1529efe64418ce0978897bb7f78de9c38a5771ebc1db0668c6f9d3b" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.428529 4637 scope.go:117] "RemoveContainer" containerID="e17dc1bf904f535d5a5d8e9dc8cad8539bb5daf54f25faa505bf158bf7f6d14d" Dec 01 15:05:33 crc kubenswrapper[4637]: E1201 15:05:33.430391 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e17dc1bf904f535d5a5d8e9dc8cad8539bb5daf54f25faa505bf158bf7f6d14d\": container with ID starting with e17dc1bf904f535d5a5d8e9dc8cad8539bb5daf54f25faa505bf158bf7f6d14d not found: ID does not exist" containerID="e17dc1bf904f535d5a5d8e9dc8cad8539bb5daf54f25faa505bf158bf7f6d14d" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.430436 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e17dc1bf904f535d5a5d8e9dc8cad8539bb5daf54f25faa505bf158bf7f6d14d"} err="failed to get container status \"e17dc1bf904f535d5a5d8e9dc8cad8539bb5daf54f25faa505bf158bf7f6d14d\": rpc error: code = NotFound desc = could not find container \"e17dc1bf904f535d5a5d8e9dc8cad8539bb5daf54f25faa505bf158bf7f6d14d\": container with ID starting with e17dc1bf904f535d5a5d8e9dc8cad8539bb5daf54f25faa505bf158bf7f6d14d not found: ID does not exist" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.430472 4637 scope.go:117] "RemoveContainer" containerID="248f5453c29fe0b943bd71b08fdc80d643302a3ef0fb4172b6eed343093a4efa" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.430704 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-scripts\") pod \"ceilometer-0\" (UID: \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\") " 
pod="openstack/ceilometer-0" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.430761 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\") " pod="openstack/ceilometer-0" Dec 01 15:05:33 crc kubenswrapper[4637]: E1201 15:05:33.430796 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"248f5453c29fe0b943bd71b08fdc80d643302a3ef0fb4172b6eed343093a4efa\": container with ID starting with 248f5453c29fe0b943bd71b08fdc80d643302a3ef0fb4172b6eed343093a4efa not found: ID does not exist" containerID="248f5453c29fe0b943bd71b08fdc80d643302a3ef0fb4172b6eed343093a4efa" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.430823 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"248f5453c29fe0b943bd71b08fdc80d643302a3ef0fb4172b6eed343093a4efa"} err="failed to get container status \"248f5453c29fe0b943bd71b08fdc80d643302a3ef0fb4172b6eed343093a4efa\": rpc error: code = NotFound desc = could not find container \"248f5453c29fe0b943bd71b08fdc80d643302a3ef0fb4172b6eed343093a4efa\": container with ID starting with 248f5453c29fe0b943bd71b08fdc80d643302a3ef0fb4172b6eed343093a4efa not found: ID does not exist" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.430837 4637 scope.go:117] "RemoveContainer" containerID="31aff1cf258bc9d0e2bbb5c6cb4b4fab8ccb978413181d64a2e7c812bc2a6280" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.430880 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-log-httpd\") pod \"ceilometer-0\" (UID: \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\") " pod="openstack/ceilometer-0" Dec 01 15:05:33 crc 
kubenswrapper[4637]: I1201 15:05:33.430959 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\") " pod="openstack/ceilometer-0" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.430997 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-config-data\") pod \"ceilometer-0\" (UID: \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\") " pod="openstack/ceilometer-0" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.431018 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zntp4\" (UniqueName: \"kubernetes.io/projected/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-kube-api-access-zntp4\") pod \"ceilometer-0\" (UID: \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\") " pod="openstack/ceilometer-0" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.431067 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-run-httpd\") pod \"ceilometer-0\" (UID: \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\") " pod="openstack/ceilometer-0" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.431675 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-run-httpd\") pod \"ceilometer-0\" (UID: \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\") " pod="openstack/ceilometer-0" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.432630 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-log-httpd\") pod 
\"ceilometer-0\" (UID: \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\") " pod="openstack/ceilometer-0" Dec 01 15:05:33 crc kubenswrapper[4637]: E1201 15:05:33.435492 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31aff1cf258bc9d0e2bbb5c6cb4b4fab8ccb978413181d64a2e7c812bc2a6280\": container with ID starting with 31aff1cf258bc9d0e2bbb5c6cb4b4fab8ccb978413181d64a2e7c812bc2a6280 not found: ID does not exist" containerID="31aff1cf258bc9d0e2bbb5c6cb4b4fab8ccb978413181d64a2e7c812bc2a6280" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.435566 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31aff1cf258bc9d0e2bbb5c6cb4b4fab8ccb978413181d64a2e7c812bc2a6280"} err="failed to get container status \"31aff1cf258bc9d0e2bbb5c6cb4b4fab8ccb978413181d64a2e7c812bc2a6280\": rpc error: code = NotFound desc = could not find container \"31aff1cf258bc9d0e2bbb5c6cb4b4fab8ccb978413181d64a2e7c812bc2a6280\": container with ID starting with 31aff1cf258bc9d0e2bbb5c6cb4b4fab8ccb978413181d64a2e7c812bc2a6280 not found: ID does not exist" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.435606 4637 scope.go:117] "RemoveContainer" containerID="acf800f5e1529efe64418ce0978897bb7f78de9c38a5771ebc1db0668c6f9d3b" Dec 01 15:05:33 crc kubenswrapper[4637]: E1201 15:05:33.437564 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acf800f5e1529efe64418ce0978897bb7f78de9c38a5771ebc1db0668c6f9d3b\": container with ID starting with acf800f5e1529efe64418ce0978897bb7f78de9c38a5771ebc1db0668c6f9d3b not found: ID does not exist" containerID="acf800f5e1529efe64418ce0978897bb7f78de9c38a5771ebc1db0668c6f9d3b" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.437632 4637 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"acf800f5e1529efe64418ce0978897bb7f78de9c38a5771ebc1db0668c6f9d3b"} err="failed to get container status \"acf800f5e1529efe64418ce0978897bb7f78de9c38a5771ebc1db0668c6f9d3b\": rpc error: code = NotFound desc = could not find container \"acf800f5e1529efe64418ce0978897bb7f78de9c38a5771ebc1db0668c6f9d3b\": container with ID starting with acf800f5e1529efe64418ce0978897bb7f78de9c38a5771ebc1db0668c6f9d3b not found: ID does not exist" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.445663 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-config-data\") pod \"ceilometer-0\" (UID: \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\") " pod="openstack/ceilometer-0" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.446980 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\") " pod="openstack/ceilometer-0" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.450678 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-scripts\") pod \"ceilometer-0\" (UID: \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\") " pod="openstack/ceilometer-0" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.454297 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\") " pod="openstack/ceilometer-0" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.459295 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zntp4\" (UniqueName: \"kubernetes.io/projected/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-kube-api-access-zntp4\") pod \"ceilometer-0\" (UID: \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\") " pod="openstack/ceilometer-0" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.549230 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.680567 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ddhlw"] Dec 01 15:05:33 crc kubenswrapper[4637]: W1201 15:05:33.688998 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod525ed4f2_f6a1_436b_8119_cf3fc620c6a7.slice/crio-ecb7a32c4a614e23a2f79d299044f7ed2974b17d5197c1e55b861d24a19a10d8 WatchSource:0}: Error finding container ecb7a32c4a614e23a2f79d299044f7ed2974b17d5197c1e55b861d24a19a10d8: Status 404 returned error can't find the container with id ecb7a32c4a614e23a2f79d299044f7ed2974b17d5197c1e55b861d24a19a10d8 Dec 01 15:05:33 crc kubenswrapper[4637]: I1201 15:05:33.791243 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0d20541-3861-4d08-89d8-f62a23d65bb9" path="/var/lib/kubelet/pods/c0d20541-3861-4d08-89d8-f62a23d65bb9/volumes" Dec 01 15:05:34 crc kubenswrapper[4637]: I1201 15:05:34.098395 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ddhlw" event={"ID":"525ed4f2-f6a1-436b-8119-cf3fc620c6a7","Type":"ContainerStarted","Data":"ecb7a32c4a614e23a2f79d299044f7ed2974b17d5197c1e55b861d24a19a10d8"} Dec 01 15:05:34 crc kubenswrapper[4637]: I1201 15:05:34.151638 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:05:35 crc kubenswrapper[4637]: I1201 15:05:35.097446 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 01 
15:05:35 crc kubenswrapper[4637]: I1201 15:05:35.109172 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975","Type":"ContainerStarted","Data":"bdf347d667475e856484b76f288d57dbddb3cdfdcc449383d9476aa97b9d48e6"} Dec 01 15:05:35 crc kubenswrapper[4637]: I1201 15:05:35.367859 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 01 15:05:37 crc kubenswrapper[4637]: I1201 15:05:37.131880 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975","Type":"ContainerStarted","Data":"1cfa446a0484d0c4a717b7bf32cf89d3bae9b4c33ae87690d609de026a24f433"} Dec 01 15:05:37 crc kubenswrapper[4637]: I1201 15:05:37.132510 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975","Type":"ContainerStarted","Data":"13f5431e29a7086b496104f88e98cc217bcf5fa065f5b65f4447912421b22337"} Dec 01 15:05:40 crc kubenswrapper[4637]: I1201 15:05:40.638519 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 01 15:05:43 crc kubenswrapper[4637]: I1201 15:05:43.096017 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:05:46 crc kubenswrapper[4637]: I1201 15:05:46.155424 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 15:05:46 crc kubenswrapper[4637]: I1201 15:05:46.156494 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="94132f58-a470-4b02-acc0-f59d994e07ea" containerName="glance-log" containerID="cri-o://a03150ec6f1cb47f96ac044d3d9982eebaa64693b5dfeccf4951fba1b53460c7" gracePeriod=30 Dec 01 15:05:46 crc kubenswrapper[4637]: I1201 15:05:46.156973 4637 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="94132f58-a470-4b02-acc0-f59d994e07ea" containerName="glance-httpd" containerID="cri-o://7420952e58ffa4feeaf35d2110a689b2890a445f6888c993aa4aadf90b4e98f4" gracePeriod=30 Dec 01 15:05:47 crc kubenswrapper[4637]: I1201 15:05:47.233633 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 15:05:47 crc kubenswrapper[4637]: I1201 15:05:47.234172 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="db43f411-7028-4fde-ac84-bc4b00053f4f" containerName="glance-log" containerID="cri-o://a12cb5f75cb97643ef287b5f85a9c45138d146a160818156652a4785283b56f6" gracePeriod=30 Dec 01 15:05:47 crc kubenswrapper[4637]: I1201 15:05:47.234610 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="db43f411-7028-4fde-ac84-bc4b00053f4f" containerName="glance-httpd" containerID="cri-o://f96a7982b3d45d73936fcbe8a94cfe863f3e0e26c8ba79b687eeceaee7bde7d8" gracePeriod=30 Dec 01 15:05:47 crc kubenswrapper[4637]: I1201 15:05:47.258756 4637 generic.go:334] "Generic (PLEG): container finished" podID="94132f58-a470-4b02-acc0-f59d994e07ea" containerID="a03150ec6f1cb47f96ac044d3d9982eebaa64693b5dfeccf4951fba1b53460c7" exitCode=143 Dec 01 15:05:47 crc kubenswrapper[4637]: I1201 15:05:47.258872 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94132f58-a470-4b02-acc0-f59d994e07ea","Type":"ContainerDied","Data":"a03150ec6f1cb47f96ac044d3d9982eebaa64693b5dfeccf4951fba1b53460c7"} Dec 01 15:05:47 crc kubenswrapper[4637]: I1201 15:05:47.260810 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ddhlw" 
event={"ID":"525ed4f2-f6a1-436b-8119-cf3fc620c6a7","Type":"ContainerStarted","Data":"4c0b37344f846e257e75801a2714dd437589f9bce749321dd90ef3d8d6e40aa7"} Dec 01 15:05:47 crc kubenswrapper[4637]: I1201 15:05:47.267056 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975","Type":"ContainerStarted","Data":"964b6e6d258653a4b63af793eb667e309819b5cefd78c816ac2c1825e96498ff"} Dec 01 15:05:47 crc kubenswrapper[4637]: I1201 15:05:47.308650 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-ddhlw" podStartSLOduration=2.899520157 podStartE2EDuration="15.308622985s" podCreationTimestamp="2025-12-01 15:05:32 +0000 UTC" firstStartedPulling="2025-12-01 15:05:33.691424413 +0000 UTC m=+1184.209133241" lastFinishedPulling="2025-12-01 15:05:46.100527251 +0000 UTC m=+1196.618236069" observedRunningTime="2025-12-01 15:05:47.307719081 +0000 UTC m=+1197.825427909" watchObservedRunningTime="2025-12-01 15:05:47.308622985 +0000 UTC m=+1197.826331813" Dec 01 15:05:48 crc kubenswrapper[4637]: I1201 15:05:48.279881 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975","Type":"ContainerStarted","Data":"6803ef8f52c7e255b500778dfa1925819eab711a2d8b0c0afe555ef743916fad"} Dec 01 15:05:48 crc kubenswrapper[4637]: I1201 15:05:48.280307 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 01 15:05:48 crc kubenswrapper[4637]: I1201 15:05:48.280085 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975" containerName="sg-core" containerID="cri-o://964b6e6d258653a4b63af793eb667e309819b5cefd78c816ac2c1825e96498ff" gracePeriod=30 Dec 01 15:05:48 crc kubenswrapper[4637]: I1201 15:05:48.280061 4637 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/ceilometer-0" podUID="e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975" containerName="ceilometer-central-agent" containerID="cri-o://13f5431e29a7086b496104f88e98cc217bcf5fa065f5b65f4447912421b22337" gracePeriod=30 Dec 01 15:05:48 crc kubenswrapper[4637]: I1201 15:05:48.280097 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975" containerName="ceilometer-notification-agent" containerID="cri-o://1cfa446a0484d0c4a717b7bf32cf89d3bae9b4c33ae87690d609de026a24f433" gracePeriod=30 Dec 01 15:05:48 crc kubenswrapper[4637]: I1201 15:05:48.280150 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975" containerName="proxy-httpd" containerID="cri-o://6803ef8f52c7e255b500778dfa1925819eab711a2d8b0c0afe555ef743916fad" gracePeriod=30 Dec 01 15:05:48 crc kubenswrapper[4637]: I1201 15:05:48.282941 4637 generic.go:334] "Generic (PLEG): container finished" podID="db43f411-7028-4fde-ac84-bc4b00053f4f" containerID="a12cb5f75cb97643ef287b5f85a9c45138d146a160818156652a4785283b56f6" exitCode=143 Dec 01 15:05:48 crc kubenswrapper[4637]: I1201 15:05:48.283156 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"db43f411-7028-4fde-ac84-bc4b00053f4f","Type":"ContainerDied","Data":"a12cb5f75cb97643ef287b5f85a9c45138d146a160818156652a4785283b56f6"} Dec 01 15:05:48 crc kubenswrapper[4637]: I1201 15:05:48.315866 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.704671763 podStartE2EDuration="15.315848963s" podCreationTimestamp="2025-12-01 15:05:33 +0000 UTC" firstStartedPulling="2025-12-01 15:05:34.148175483 +0000 UTC m=+1184.665884301" lastFinishedPulling="2025-12-01 15:05:47.759352673 +0000 UTC m=+1198.277061501" observedRunningTime="2025-12-01 
15:05:48.312770949 +0000 UTC m=+1198.830479777" watchObservedRunningTime="2025-12-01 15:05:48.315848963 +0000 UTC m=+1198.833557791" Dec 01 15:05:49 crc kubenswrapper[4637]: I1201 15:05:49.295833 4637 generic.go:334] "Generic (PLEG): container finished" podID="e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975" containerID="6803ef8f52c7e255b500778dfa1925819eab711a2d8b0c0afe555ef743916fad" exitCode=0 Dec 01 15:05:49 crc kubenswrapper[4637]: I1201 15:05:49.296345 4637 generic.go:334] "Generic (PLEG): container finished" podID="e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975" containerID="964b6e6d258653a4b63af793eb667e309819b5cefd78c816ac2c1825e96498ff" exitCode=2 Dec 01 15:05:49 crc kubenswrapper[4637]: I1201 15:05:49.296357 4637 generic.go:334] "Generic (PLEG): container finished" podID="e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975" containerID="13f5431e29a7086b496104f88e98cc217bcf5fa065f5b65f4447912421b22337" exitCode=0 Dec 01 15:05:49 crc kubenswrapper[4637]: I1201 15:05:49.295944 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975","Type":"ContainerDied","Data":"6803ef8f52c7e255b500778dfa1925819eab711a2d8b0c0afe555ef743916fad"} Dec 01 15:05:49 crc kubenswrapper[4637]: I1201 15:05:49.296409 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975","Type":"ContainerDied","Data":"964b6e6d258653a4b63af793eb667e309819b5cefd78c816ac2c1825e96498ff"} Dec 01 15:05:49 crc kubenswrapper[4637]: I1201 15:05:49.296429 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975","Type":"ContainerDied","Data":"13f5431e29a7086b496104f88e98cc217bcf5fa065f5b65f4447912421b22337"} Dec 01 15:05:49 crc kubenswrapper[4637]: I1201 15:05:49.812089 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:05:49 crc kubenswrapper[4637]: I1201 15:05:49.923437 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-sg-core-conf-yaml\") pod \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\" (UID: \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\") " Dec 01 15:05:49 crc kubenswrapper[4637]: I1201 15:05:49.923924 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-combined-ca-bundle\") pod \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\" (UID: \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\") " Dec 01 15:05:49 crc kubenswrapper[4637]: I1201 15:05:49.924043 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-scripts\") pod \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\" (UID: \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\") " Dec 01 15:05:49 crc kubenswrapper[4637]: I1201 15:05:49.924150 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-run-httpd\") pod \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\" (UID: \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\") " Dec 01 15:05:49 crc kubenswrapper[4637]: I1201 15:05:49.924247 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-config-data\") pod \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\" (UID: \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\") " Dec 01 15:05:49 crc kubenswrapper[4637]: I1201 15:05:49.924316 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-log-httpd\") pod \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\" (UID: \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\") " Dec 01 15:05:49 crc kubenswrapper[4637]: I1201 15:05:49.924381 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zntp4\" (UniqueName: \"kubernetes.io/projected/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-kube-api-access-zntp4\") pod \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\" (UID: \"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975\") " Dec 01 15:05:49 crc kubenswrapper[4637]: I1201 15:05:49.940990 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975" (UID: "e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:05:49 crc kubenswrapper[4637]: I1201 15:05:49.943355 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975" (UID: "e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:05:49 crc kubenswrapper[4637]: I1201 15:05:49.965630 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-kube-api-access-zntp4" (OuterVolumeSpecName: "kube-api-access-zntp4") pod "e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975" (UID: "e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975"). InnerVolumeSpecName "kube-api-access-zntp4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:05:49 crc kubenswrapper[4637]: I1201 15:05:49.986095 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-scripts" (OuterVolumeSpecName: "scripts") pod "e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975" (UID: "e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.003962 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975" (UID: "e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.027773 4637 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.027820 4637 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.027833 4637 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.027845 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zntp4\" (UniqueName: \"kubernetes.io/projected/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-kube-api-access-zntp4\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:50 crc 
kubenswrapper[4637]: I1201 15:05:50.027857 4637 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.147301 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975" (UID: "e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.179233 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-config-data" (OuterVolumeSpecName: "config-data") pod "e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975" (UID: "e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.235842 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.235882 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.340149 4637 generic.go:334] "Generic (PLEG): container finished" podID="94132f58-a470-4b02-acc0-f59d994e07ea" containerID="7420952e58ffa4feeaf35d2110a689b2890a445f6888c993aa4aadf90b4e98f4" exitCode=0 Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.340226 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94132f58-a470-4b02-acc0-f59d994e07ea","Type":"ContainerDied","Data":"7420952e58ffa4feeaf35d2110a689b2890a445f6888c993aa4aadf90b4e98f4"} Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.344169 4637 generic.go:334] "Generic (PLEG): container finished" podID="e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975" containerID="1cfa446a0484d0c4a717b7bf32cf89d3bae9b4c33ae87690d609de026a24f433" exitCode=0 Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.344210 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975","Type":"ContainerDied","Data":"1cfa446a0484d0c4a717b7bf32cf89d3bae9b4c33ae87690d609de026a24f433"} Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.344238 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975","Type":"ContainerDied","Data":"bdf347d667475e856484b76f288d57dbddb3cdfdcc449383d9476aa97b9d48e6"} Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.344257 4637 scope.go:117] "RemoveContainer" containerID="6803ef8f52c7e255b500778dfa1925819eab711a2d8b0c0afe555ef743916fad" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.344375 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.382786 4637 scope.go:117] "RemoveContainer" containerID="964b6e6d258653a4b63af793eb667e309819b5cefd78c816ac2c1825e96498ff" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.410248 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.450469 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.469563 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:05:50 crc kubenswrapper[4637]: E1201 15:05:50.470055 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975" containerName="sg-core" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.470086 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975" containerName="sg-core" Dec 01 15:05:50 crc kubenswrapper[4637]: E1201 15:05:50.470123 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975" containerName="ceilometer-notification-agent" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.470132 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975" containerName="ceilometer-notification-agent" Dec 01 15:05:50 crc kubenswrapper[4637]: E1201 15:05:50.470151 4637 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975" containerName="ceilometer-central-agent" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.470164 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975" containerName="ceilometer-central-agent" Dec 01 15:05:50 crc kubenswrapper[4637]: E1201 15:05:50.470185 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975" containerName="proxy-httpd" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.470195 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975" containerName="proxy-httpd" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.470375 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975" containerName="proxy-httpd" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.470393 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975" containerName="ceilometer-central-agent" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.470408 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975" containerName="ceilometer-notification-agent" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.470423 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975" containerName="sg-core" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.472076 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.477784 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.477968 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.478001 4637 scope.go:117] "RemoveContainer" containerID="1cfa446a0484d0c4a717b7bf32cf89d3bae9b4c33ae87690d609de026a24f433" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.485825 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.519875 4637 scope.go:117] "RemoveContainer" containerID="13f5431e29a7086b496104f88e98cc217bcf5fa065f5b65f4447912421b22337" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.553191 4637 scope.go:117] "RemoveContainer" containerID="6803ef8f52c7e255b500778dfa1925819eab711a2d8b0c0afe555ef743916fad" Dec 01 15:05:50 crc kubenswrapper[4637]: E1201 15:05:50.556974 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6803ef8f52c7e255b500778dfa1925819eab711a2d8b0c0afe555ef743916fad\": container with ID starting with 6803ef8f52c7e255b500778dfa1925819eab711a2d8b0c0afe555ef743916fad not found: ID does not exist" containerID="6803ef8f52c7e255b500778dfa1925819eab711a2d8b0c0afe555ef743916fad" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.557088 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6803ef8f52c7e255b500778dfa1925819eab711a2d8b0c0afe555ef743916fad"} err="failed to get container status \"6803ef8f52c7e255b500778dfa1925819eab711a2d8b0c0afe555ef743916fad\": rpc error: code = NotFound desc = could not find container \"6803ef8f52c7e255b500778dfa1925819eab711a2d8b0c0afe555ef743916fad\": 
container with ID starting with 6803ef8f52c7e255b500778dfa1925819eab711a2d8b0c0afe555ef743916fad not found: ID does not exist" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.557122 4637 scope.go:117] "RemoveContainer" containerID="964b6e6d258653a4b63af793eb667e309819b5cefd78c816ac2c1825e96498ff" Dec 01 15:05:50 crc kubenswrapper[4637]: E1201 15:05:50.557451 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"964b6e6d258653a4b63af793eb667e309819b5cefd78c816ac2c1825e96498ff\": container with ID starting with 964b6e6d258653a4b63af793eb667e309819b5cefd78c816ac2c1825e96498ff not found: ID does not exist" containerID="964b6e6d258653a4b63af793eb667e309819b5cefd78c816ac2c1825e96498ff" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.557482 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"964b6e6d258653a4b63af793eb667e309819b5cefd78c816ac2c1825e96498ff"} err="failed to get container status \"964b6e6d258653a4b63af793eb667e309819b5cefd78c816ac2c1825e96498ff\": rpc error: code = NotFound desc = could not find container \"964b6e6d258653a4b63af793eb667e309819b5cefd78c816ac2c1825e96498ff\": container with ID starting with 964b6e6d258653a4b63af793eb667e309819b5cefd78c816ac2c1825e96498ff not found: ID does not exist" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.557499 4637 scope.go:117] "RemoveContainer" containerID="1cfa446a0484d0c4a717b7bf32cf89d3bae9b4c33ae87690d609de026a24f433" Dec 01 15:05:50 crc kubenswrapper[4637]: E1201 15:05:50.557981 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cfa446a0484d0c4a717b7bf32cf89d3bae9b4c33ae87690d609de026a24f433\": container with ID starting with 1cfa446a0484d0c4a717b7bf32cf89d3bae9b4c33ae87690d609de026a24f433 not found: ID does not exist" 
containerID="1cfa446a0484d0c4a717b7bf32cf89d3bae9b4c33ae87690d609de026a24f433" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.561040 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cfa446a0484d0c4a717b7bf32cf89d3bae9b4c33ae87690d609de026a24f433"} err="failed to get container status \"1cfa446a0484d0c4a717b7bf32cf89d3bae9b4c33ae87690d609de026a24f433\": rpc error: code = NotFound desc = could not find container \"1cfa446a0484d0c4a717b7bf32cf89d3bae9b4c33ae87690d609de026a24f433\": container with ID starting with 1cfa446a0484d0c4a717b7bf32cf89d3bae9b4c33ae87690d609de026a24f433 not found: ID does not exist" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.561071 4637 scope.go:117] "RemoveContainer" containerID="13f5431e29a7086b496104f88e98cc217bcf5fa065f5b65f4447912421b22337" Dec 01 15:05:50 crc kubenswrapper[4637]: E1201 15:05:50.561307 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13f5431e29a7086b496104f88e98cc217bcf5fa065f5b65f4447912421b22337\": container with ID starting with 13f5431e29a7086b496104f88e98cc217bcf5fa065f5b65f4447912421b22337 not found: ID does not exist" containerID="13f5431e29a7086b496104f88e98cc217bcf5fa065f5b65f4447912421b22337" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.561322 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13f5431e29a7086b496104f88e98cc217bcf5fa065f5b65f4447912421b22337"} err="failed to get container status \"13f5431e29a7086b496104f88e98cc217bcf5fa065f5b65f4447912421b22337\": rpc error: code = NotFound desc = could not find container \"13f5431e29a7086b496104f88e98cc217bcf5fa065f5b65f4447912421b22337\": container with ID starting with 13f5431e29a7086b496104f88e98cc217bcf5fa065f5b65f4447912421b22337 not found: ID does not exist" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.575423 4637 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.644362 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64564820-1434-4fa4-909a-7a9310f55f81-run-httpd\") pod \"ceilometer-0\" (UID: \"64564820-1434-4fa4-909a-7a9310f55f81\") " pod="openstack/ceilometer-0" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.644423 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64564820-1434-4fa4-909a-7a9310f55f81-config-data\") pod \"ceilometer-0\" (UID: \"64564820-1434-4fa4-909a-7a9310f55f81\") " pod="openstack/ceilometer-0" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.644452 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64564820-1434-4fa4-909a-7a9310f55f81-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"64564820-1434-4fa4-909a-7a9310f55f81\") " pod="openstack/ceilometer-0" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.644517 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2qt4\" (UniqueName: \"kubernetes.io/projected/64564820-1434-4fa4-909a-7a9310f55f81-kube-api-access-g2qt4\") pod \"ceilometer-0\" (UID: \"64564820-1434-4fa4-909a-7a9310f55f81\") " pod="openstack/ceilometer-0" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.644551 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64564820-1434-4fa4-909a-7a9310f55f81-log-httpd\") pod \"ceilometer-0\" (UID: \"64564820-1434-4fa4-909a-7a9310f55f81\") " pod="openstack/ceilometer-0" Dec 01 15:05:50 crc 
kubenswrapper[4637]: I1201 15:05:50.644568 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/64564820-1434-4fa4-909a-7a9310f55f81-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"64564820-1434-4fa4-909a-7a9310f55f81\") " pod="openstack/ceilometer-0" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.644608 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64564820-1434-4fa4-909a-7a9310f55f81-scripts\") pod \"ceilometer-0\" (UID: \"64564820-1434-4fa4-909a-7a9310f55f81\") " pod="openstack/ceilometer-0" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.745868 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94132f58-a470-4b02-acc0-f59d994e07ea-scripts\") pod \"94132f58-a470-4b02-acc0-f59d994e07ea\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.745983 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"94132f58-a470-4b02-acc0-f59d994e07ea\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.746032 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94132f58-a470-4b02-acc0-f59d994e07ea-combined-ca-bundle\") pod \"94132f58-a470-4b02-acc0-f59d994e07ea\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.746074 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94132f58-a470-4b02-acc0-f59d994e07ea-logs\") pod 
\"94132f58-a470-4b02-acc0-f59d994e07ea\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.746149 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjlhp\" (UniqueName: \"kubernetes.io/projected/94132f58-a470-4b02-acc0-f59d994e07ea-kube-api-access-tjlhp\") pod \"94132f58-a470-4b02-acc0-f59d994e07ea\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.746215 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94132f58-a470-4b02-acc0-f59d994e07ea-httpd-run\") pod \"94132f58-a470-4b02-acc0-f59d994e07ea\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.746264 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94132f58-a470-4b02-acc0-f59d994e07ea-config-data\") pod \"94132f58-a470-4b02-acc0-f59d994e07ea\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.746283 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94132f58-a470-4b02-acc0-f59d994e07ea-public-tls-certs\") pod \"94132f58-a470-4b02-acc0-f59d994e07ea\" (UID: \"94132f58-a470-4b02-acc0-f59d994e07ea\") " Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.746573 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64564820-1434-4fa4-909a-7a9310f55f81-run-httpd\") pod \"ceilometer-0\" (UID: \"64564820-1434-4fa4-909a-7a9310f55f81\") " pod="openstack/ceilometer-0" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.746602 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/64564820-1434-4fa4-909a-7a9310f55f81-config-data\") pod \"ceilometer-0\" (UID: \"64564820-1434-4fa4-909a-7a9310f55f81\") " pod="openstack/ceilometer-0" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.746626 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64564820-1434-4fa4-909a-7a9310f55f81-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"64564820-1434-4fa4-909a-7a9310f55f81\") " pod="openstack/ceilometer-0" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.746658 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2qt4\" (UniqueName: \"kubernetes.io/projected/64564820-1434-4fa4-909a-7a9310f55f81-kube-api-access-g2qt4\") pod \"ceilometer-0\" (UID: \"64564820-1434-4fa4-909a-7a9310f55f81\") " pod="openstack/ceilometer-0" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.746693 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64564820-1434-4fa4-909a-7a9310f55f81-log-httpd\") pod \"ceilometer-0\" (UID: \"64564820-1434-4fa4-909a-7a9310f55f81\") " pod="openstack/ceilometer-0" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.746711 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/64564820-1434-4fa4-909a-7a9310f55f81-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"64564820-1434-4fa4-909a-7a9310f55f81\") " pod="openstack/ceilometer-0" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.746750 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64564820-1434-4fa4-909a-7a9310f55f81-scripts\") pod \"ceilometer-0\" (UID: \"64564820-1434-4fa4-909a-7a9310f55f81\") " pod="openstack/ceilometer-0" Dec 01 15:05:50 
crc kubenswrapper[4637]: I1201 15:05:50.747198 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94132f58-a470-4b02-acc0-f59d994e07ea-logs" (OuterVolumeSpecName: "logs") pod "94132f58-a470-4b02-acc0-f59d994e07ea" (UID: "94132f58-a470-4b02-acc0-f59d994e07ea"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.748355 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64564820-1434-4fa4-909a-7a9310f55f81-run-httpd\") pod \"ceilometer-0\" (UID: \"64564820-1434-4fa4-909a-7a9310f55f81\") " pod="openstack/ceilometer-0" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.758197 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94132f58-a470-4b02-acc0-f59d994e07ea-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "94132f58-a470-4b02-acc0-f59d994e07ea" (UID: "94132f58-a470-4b02-acc0-f59d994e07ea"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.758564 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64564820-1434-4fa4-909a-7a9310f55f81-log-httpd\") pod \"ceilometer-0\" (UID: \"64564820-1434-4fa4-909a-7a9310f55f81\") " pod="openstack/ceilometer-0" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.760631 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94132f58-a470-4b02-acc0-f59d994e07ea-kube-api-access-tjlhp" (OuterVolumeSpecName: "kube-api-access-tjlhp") pod "94132f58-a470-4b02-acc0-f59d994e07ea" (UID: "94132f58-a470-4b02-acc0-f59d994e07ea"). InnerVolumeSpecName "kube-api-access-tjlhp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.760973 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64564820-1434-4fa4-909a-7a9310f55f81-config-data\") pod \"ceilometer-0\" (UID: \"64564820-1434-4fa4-909a-7a9310f55f81\") " pod="openstack/ceilometer-0" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.763072 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64564820-1434-4fa4-909a-7a9310f55f81-scripts\") pod \"ceilometer-0\" (UID: \"64564820-1434-4fa4-909a-7a9310f55f81\") " pod="openstack/ceilometer-0" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.763483 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "94132f58-a470-4b02-acc0-f59d994e07ea" (UID: "94132f58-a470-4b02-acc0-f59d994e07ea"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.778424 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64564820-1434-4fa4-909a-7a9310f55f81-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"64564820-1434-4fa4-909a-7a9310f55f81\") " pod="openstack/ceilometer-0" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.785175 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94132f58-a470-4b02-acc0-f59d994e07ea-scripts" (OuterVolumeSpecName: "scripts") pod "94132f58-a470-4b02-acc0-f59d994e07ea" (UID: "94132f58-a470-4b02-acc0-f59d994e07ea"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.797632 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/64564820-1434-4fa4-909a-7a9310f55f81-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"64564820-1434-4fa4-909a-7a9310f55f81\") " pod="openstack/ceilometer-0" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.819273 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2qt4\" (UniqueName: \"kubernetes.io/projected/64564820-1434-4fa4-909a-7a9310f55f81-kube-api-access-g2qt4\") pod \"ceilometer-0\" (UID: \"64564820-1434-4fa4-909a-7a9310f55f81\") " pod="openstack/ceilometer-0" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.831805 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94132f58-a470-4b02-acc0-f59d994e07ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94132f58-a470-4b02-acc0-f59d994e07ea" (UID: "94132f58-a470-4b02-acc0-f59d994e07ea"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.851747 4637 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94132f58-a470-4b02-acc0-f59d994e07ea-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.851785 4637 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94132f58-a470-4b02-acc0-f59d994e07ea-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.851821 4637 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.851838 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94132f58-a470-4b02-acc0-f59d994e07ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.851851 4637 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94132f58-a470-4b02-acc0-f59d994e07ea-logs\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.851863 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjlhp\" (UniqueName: \"kubernetes.io/projected/94132f58-a470-4b02-acc0-f59d994e07ea-kube-api-access-tjlhp\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.903021 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94132f58-a470-4b02-acc0-f59d994e07ea-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "94132f58-a470-4b02-acc0-f59d994e07ea" (UID: "94132f58-a470-4b02-acc0-f59d994e07ea"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.903450 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94132f58-a470-4b02-acc0-f59d994e07ea-config-data" (OuterVolumeSpecName: "config-data") pod "94132f58-a470-4b02-acc0-f59d994e07ea" (UID: "94132f58-a470-4b02-acc0-f59d994e07ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.907751 4637 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.954151 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94132f58-a470-4b02-acc0-f59d994e07ea-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.954202 4637 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94132f58-a470-4b02-acc0-f59d994e07ea-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:50 crc kubenswrapper[4637]: I1201 15:05:50.954220 4637 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.101570 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.264323 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.361598 4637 generic.go:334] "Generic (PLEG): container finished" podID="db43f411-7028-4fde-ac84-bc4b00053f4f" containerID="f96a7982b3d45d73936fcbe8a94cfe863f3e0e26c8ba79b687eeceaee7bde7d8" exitCode=0 Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.361661 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"db43f411-7028-4fde-ac84-bc4b00053f4f","Type":"ContainerDied","Data":"f96a7982b3d45d73936fcbe8a94cfe863f3e0e26c8ba79b687eeceaee7bde7d8"} Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.361697 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"db43f411-7028-4fde-ac84-bc4b00053f4f","Type":"ContainerDied","Data":"4cd29e1699bd8a1e6dbf772a46b97ab11c6b1d78f3b7117f343c52b52ab76ef9"} Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.361720 4637 scope.go:117] "RemoveContainer" containerID="f96a7982b3d45d73936fcbe8a94cfe863f3e0e26c8ba79b687eeceaee7bde7d8" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.361883 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.368576 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db43f411-7028-4fde-ac84-bc4b00053f4f-internal-tls-certs\") pod \"db43f411-7028-4fde-ac84-bc4b00053f4f\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.368642 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db43f411-7028-4fde-ac84-bc4b00053f4f-logs\") pod \"db43f411-7028-4fde-ac84-bc4b00053f4f\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.368730 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db43f411-7028-4fde-ac84-bc4b00053f4f-httpd-run\") pod \"db43f411-7028-4fde-ac84-bc4b00053f4f\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.368751 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db43f411-7028-4fde-ac84-bc4b00053f4f-combined-ca-bundle\") pod \"db43f411-7028-4fde-ac84-bc4b00053f4f\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.368790 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94sdm\" (UniqueName: \"kubernetes.io/projected/db43f411-7028-4fde-ac84-bc4b00053f4f-kube-api-access-94sdm\") pod \"db43f411-7028-4fde-ac84-bc4b00053f4f\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.368812 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/db43f411-7028-4fde-ac84-bc4b00053f4f-scripts\") pod \"db43f411-7028-4fde-ac84-bc4b00053f4f\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.368857 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"db43f411-7028-4fde-ac84-bc4b00053f4f\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.368999 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db43f411-7028-4fde-ac84-bc4b00053f4f-config-data\") pod \"db43f411-7028-4fde-ac84-bc4b00053f4f\" (UID: \"db43f411-7028-4fde-ac84-bc4b00053f4f\") " Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.370029 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94132f58-a470-4b02-acc0-f59d994e07ea","Type":"ContainerDied","Data":"5d2e7ac10ba69658226b6d05d0796f194980156be1731ee7eb474f8836bf3c2f"} Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.370185 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.370643 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db43f411-7028-4fde-ac84-bc4b00053f4f-logs" (OuterVolumeSpecName: "logs") pod "db43f411-7028-4fde-ac84-bc4b00053f4f" (UID: "db43f411-7028-4fde-ac84-bc4b00053f4f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.371692 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db43f411-7028-4fde-ac84-bc4b00053f4f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "db43f411-7028-4fde-ac84-bc4b00053f4f" (UID: "db43f411-7028-4fde-ac84-bc4b00053f4f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.385536 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db43f411-7028-4fde-ac84-bc4b00053f4f-kube-api-access-94sdm" (OuterVolumeSpecName: "kube-api-access-94sdm") pod "db43f411-7028-4fde-ac84-bc4b00053f4f" (UID: "db43f411-7028-4fde-ac84-bc4b00053f4f"). InnerVolumeSpecName "kube-api-access-94sdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.405628 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db43f411-7028-4fde-ac84-bc4b00053f4f-scripts" (OuterVolumeSpecName: "scripts") pod "db43f411-7028-4fde-ac84-bc4b00053f4f" (UID: "db43f411-7028-4fde-ac84-bc4b00053f4f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.405790 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "db43f411-7028-4fde-ac84-bc4b00053f4f" (UID: "db43f411-7028-4fde-ac84-bc4b00053f4f"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.416990 4637 scope.go:117] "RemoveContainer" containerID="a12cb5f75cb97643ef287b5f85a9c45138d146a160818156652a4785283b56f6" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.460575 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.470531 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db43f411-7028-4fde-ac84-bc4b00053f4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db43f411-7028-4fde-ac84-bc4b00053f4f" (UID: "db43f411-7028-4fde-ac84-bc4b00053f4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.470808 4637 scope.go:117] "RemoveContainer" containerID="f96a7982b3d45d73936fcbe8a94cfe863f3e0e26c8ba79b687eeceaee7bde7d8" Dec 01 15:05:51 crc kubenswrapper[4637]: E1201 15:05:51.471903 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f96a7982b3d45d73936fcbe8a94cfe863f3e0e26c8ba79b687eeceaee7bde7d8\": container with ID starting with f96a7982b3d45d73936fcbe8a94cfe863f3e0e26c8ba79b687eeceaee7bde7d8 not found: ID does not exist" containerID="f96a7982b3d45d73936fcbe8a94cfe863f3e0e26c8ba79b687eeceaee7bde7d8" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.472031 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f96a7982b3d45d73936fcbe8a94cfe863f3e0e26c8ba79b687eeceaee7bde7d8"} err="failed to get container status \"f96a7982b3d45d73936fcbe8a94cfe863f3e0e26c8ba79b687eeceaee7bde7d8\": rpc error: code = NotFound desc = could not find container \"f96a7982b3d45d73936fcbe8a94cfe863f3e0e26c8ba79b687eeceaee7bde7d8\": container with ID starting with 
f96a7982b3d45d73936fcbe8a94cfe863f3e0e26c8ba79b687eeceaee7bde7d8 not found: ID does not exist" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.472075 4637 scope.go:117] "RemoveContainer" containerID="a12cb5f75cb97643ef287b5f85a9c45138d146a160818156652a4785283b56f6" Dec 01 15:05:51 crc kubenswrapper[4637]: E1201 15:05:51.474145 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a12cb5f75cb97643ef287b5f85a9c45138d146a160818156652a4785283b56f6\": container with ID starting with a12cb5f75cb97643ef287b5f85a9c45138d146a160818156652a4785283b56f6 not found: ID does not exist" containerID="a12cb5f75cb97643ef287b5f85a9c45138d146a160818156652a4785283b56f6" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.474234 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a12cb5f75cb97643ef287b5f85a9c45138d146a160818156652a4785283b56f6"} err="failed to get container status \"a12cb5f75cb97643ef287b5f85a9c45138d146a160818156652a4785283b56f6\": rpc error: code = NotFound desc = could not find container \"a12cb5f75cb97643ef287b5f85a9c45138d146a160818156652a4785283b56f6\": container with ID starting with a12cb5f75cb97643ef287b5f85a9c45138d146a160818156652a4785283b56f6 not found: ID does not exist" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.474274 4637 scope.go:117] "RemoveContainer" containerID="7420952e58ffa4feeaf35d2110a689b2890a445f6888c993aa4aadf90b4e98f4" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.498740 4637 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db43f411-7028-4fde-ac84-bc4b00053f4f-logs\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.498783 4637 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db43f411-7028-4fde-ac84-bc4b00053f4f-httpd-run\") on node \"crc\" 
DevicePath \"\"" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.498795 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db43f411-7028-4fde-ac84-bc4b00053f4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.498808 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94sdm\" (UniqueName: \"kubernetes.io/projected/db43f411-7028-4fde-ac84-bc4b00053f4f-kube-api-access-94sdm\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.498819 4637 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db43f411-7028-4fde-ac84-bc4b00053f4f-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.498848 4637 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.526564 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.539160 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db43f411-7028-4fde-ac84-bc4b00053f4f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "db43f411-7028-4fde-ac84-bc4b00053f4f" (UID: "db43f411-7028-4fde-ac84-bc4b00053f4f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.558470 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.559921 4637 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Dec 01 15:05:51 crc kubenswrapper[4637]: E1201 15:05:51.562234 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db43f411-7028-4fde-ac84-bc4b00053f4f" containerName="glance-log" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.562364 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="db43f411-7028-4fde-ac84-bc4b00053f4f" containerName="glance-log" Dec 01 15:05:51 crc kubenswrapper[4637]: E1201 15:05:51.562472 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db43f411-7028-4fde-ac84-bc4b00053f4f" containerName="glance-httpd" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.562534 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="db43f411-7028-4fde-ac84-bc4b00053f4f" containerName="glance-httpd" Dec 01 15:05:51 crc kubenswrapper[4637]: E1201 15:05:51.562612 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94132f58-a470-4b02-acc0-f59d994e07ea" containerName="glance-log" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.562667 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="94132f58-a470-4b02-acc0-f59d994e07ea" containerName="glance-log" Dec 01 15:05:51 crc kubenswrapper[4637]: E1201 15:05:51.562742 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94132f58-a470-4b02-acc0-f59d994e07ea" containerName="glance-httpd" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.562792 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="94132f58-a470-4b02-acc0-f59d994e07ea" 
containerName="glance-httpd" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.563101 4637 scope.go:117] "RemoveContainer" containerID="a03150ec6f1cb47f96ac044d3d9982eebaa64693b5dfeccf4951fba1b53460c7" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.563343 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="94132f58-a470-4b02-acc0-f59d994e07ea" containerName="glance-httpd" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.563434 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="db43f411-7028-4fde-ac84-bc4b00053f4f" containerName="glance-log" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.563537 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="94132f58-a470-4b02-acc0-f59d994e07ea" containerName="glance-log" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.563622 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="db43f411-7028-4fde-ac84-bc4b00053f4f" containerName="glance-httpd" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.565408 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.571521 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.572827 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.583337 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db43f411-7028-4fde-ac84-bc4b00053f4f-config-data" (OuterVolumeSpecName: "config-data") pod "db43f411-7028-4fde-ac84-bc4b00053f4f" (UID: "db43f411-7028-4fde-ac84-bc4b00053f4f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.602879 4637 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db43f411-7028-4fde-ac84-bc4b00053f4f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.602918 4637 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.602958 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db43f411-7028-4fde-ac84-bc4b00053f4f-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.619913 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.650425 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.704312 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11dea63e-843a-4a51-9525-4cda961c167a-scripts\") pod \"glance-default-external-api-0\" (UID: \"11dea63e-843a-4a51-9525-4cda961c167a\") " pod="openstack/glance-default-external-api-0" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.705791 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"11dea63e-843a-4a51-9525-4cda961c167a\") " pod="openstack/glance-default-external-api-0" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.705885 4637 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11dea63e-843a-4a51-9525-4cda961c167a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"11dea63e-843a-4a51-9525-4cda961c167a\") " pod="openstack/glance-default-external-api-0" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.706035 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc56d\" (UniqueName: \"kubernetes.io/projected/11dea63e-843a-4a51-9525-4cda961c167a-kube-api-access-sc56d\") pod \"glance-default-external-api-0\" (UID: \"11dea63e-843a-4a51-9525-4cda961c167a\") " pod="openstack/glance-default-external-api-0" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.706240 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11dea63e-843a-4a51-9525-4cda961c167a-logs\") pod \"glance-default-external-api-0\" (UID: \"11dea63e-843a-4a51-9525-4cda961c167a\") " pod="openstack/glance-default-external-api-0" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.707343 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11dea63e-843a-4a51-9525-4cda961c167a-config-data\") pod \"glance-default-external-api-0\" (UID: \"11dea63e-843a-4a51-9525-4cda961c167a\") " pod="openstack/glance-default-external-api-0" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.707465 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11dea63e-843a-4a51-9525-4cda961c167a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"11dea63e-843a-4a51-9525-4cda961c167a\") " pod="openstack/glance-default-external-api-0" Dec 01 15:05:51 crc 
kubenswrapper[4637]: I1201 15:05:51.707542 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/11dea63e-843a-4a51-9525-4cda961c167a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"11dea63e-843a-4a51-9525-4cda961c167a\") " pod="openstack/glance-default-external-api-0" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.766770 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.791289 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94132f58-a470-4b02-acc0-f59d994e07ea" path="/var/lib/kubelet/pods/94132f58-a470-4b02-acc0-f59d994e07ea/volumes" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.799556 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975" path="/var/lib/kubelet/pods/e3a6a3ec-2d0c-4a35-bed6-9caf7a2c8975/volumes" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.800945 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.809527 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11dea63e-843a-4a51-9525-4cda961c167a-config-data\") pod \"glance-default-external-api-0\" (UID: \"11dea63e-843a-4a51-9525-4cda961c167a\") " pod="openstack/glance-default-external-api-0" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.809602 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11dea63e-843a-4a51-9525-4cda961c167a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"11dea63e-843a-4a51-9525-4cda961c167a\") " pod="openstack/glance-default-external-api-0" Dec 01 
15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.809632 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/11dea63e-843a-4a51-9525-4cda961c167a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"11dea63e-843a-4a51-9525-4cda961c167a\") " pod="openstack/glance-default-external-api-0" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.809668 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11dea63e-843a-4a51-9525-4cda961c167a-scripts\") pod \"glance-default-external-api-0\" (UID: \"11dea63e-843a-4a51-9525-4cda961c167a\") " pod="openstack/glance-default-external-api-0" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.809702 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"11dea63e-843a-4a51-9525-4cda961c167a\") " pod="openstack/glance-default-external-api-0" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.809732 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11dea63e-843a-4a51-9525-4cda961c167a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"11dea63e-843a-4a51-9525-4cda961c167a\") " pod="openstack/glance-default-external-api-0" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.809764 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc56d\" (UniqueName: \"kubernetes.io/projected/11dea63e-843a-4a51-9525-4cda961c167a-kube-api-access-sc56d\") pod \"glance-default-external-api-0\" (UID: \"11dea63e-843a-4a51-9525-4cda961c167a\") " pod="openstack/glance-default-external-api-0" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.809814 4637 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11dea63e-843a-4a51-9525-4cda961c167a-logs\") pod \"glance-default-external-api-0\" (UID: \"11dea63e-843a-4a51-9525-4cda961c167a\") " pod="openstack/glance-default-external-api-0" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.811307 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11dea63e-843a-4a51-9525-4cda961c167a-logs\") pod \"glance-default-external-api-0\" (UID: \"11dea63e-843a-4a51-9525-4cda961c167a\") " pod="openstack/glance-default-external-api-0" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.811564 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/11dea63e-843a-4a51-9525-4cda961c167a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"11dea63e-843a-4a51-9525-4cda961c167a\") " pod="openstack/glance-default-external-api-0" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.813300 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.814819 4637 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"11dea63e-843a-4a51-9525-4cda961c167a\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.816943 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.825305 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.827323 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11dea63e-843a-4a51-9525-4cda961c167a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"11dea63e-843a-4a51-9525-4cda961c167a\") " pod="openstack/glance-default-external-api-0" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.844188 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.860803 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11dea63e-843a-4a51-9525-4cda961c167a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"11dea63e-843a-4a51-9525-4cda961c167a\") " pod="openstack/glance-default-external-api-0" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.896781 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.913206 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11dea63e-843a-4a51-9525-4cda961c167a-scripts\") pod \"glance-default-external-api-0\" (UID: \"11dea63e-843a-4a51-9525-4cda961c167a\") " pod="openstack/glance-default-external-api-0" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.914084 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc56d\" (UniqueName: \"kubernetes.io/projected/11dea63e-843a-4a51-9525-4cda961c167a-kube-api-access-sc56d\") pod 
\"glance-default-external-api-0\" (UID: \"11dea63e-843a-4a51-9525-4cda961c167a\") " pod="openstack/glance-default-external-api-0" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.914545 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11dea63e-843a-4a51-9525-4cda961c167a-config-data\") pod \"glance-default-external-api-0\" (UID: \"11dea63e-843a-4a51-9525-4cda961c167a\") " pod="openstack/glance-default-external-api-0" Dec 01 15:05:51 crc kubenswrapper[4637]: I1201 15:05:51.918267 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"11dea63e-843a-4a51-9525-4cda961c167a\") " pod="openstack/glance-default-external-api-0" Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.017755 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"00582d1a-8f52-49ad-9adc-306f07c46255\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.018207 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00582d1a-8f52-49ad-9adc-306f07c46255-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"00582d1a-8f52-49ad-9adc-306f07c46255\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.018466 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00582d1a-8f52-49ad-9adc-306f07c46255-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"00582d1a-8f52-49ad-9adc-306f07c46255\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.018631 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/00582d1a-8f52-49ad-9adc-306f07c46255-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"00582d1a-8f52-49ad-9adc-306f07c46255\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.018689 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00582d1a-8f52-49ad-9adc-306f07c46255-scripts\") pod \"glance-default-internal-api-0\" (UID: \"00582d1a-8f52-49ad-9adc-306f07c46255\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.018897 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlx2s\" (UniqueName: \"kubernetes.io/projected/00582d1a-8f52-49ad-9adc-306f07c46255-kube-api-access-xlx2s\") pod \"glance-default-internal-api-0\" (UID: \"00582d1a-8f52-49ad-9adc-306f07c46255\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.019058 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00582d1a-8f52-49ad-9adc-306f07c46255-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"00582d1a-8f52-49ad-9adc-306f07c46255\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.019088 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00582d1a-8f52-49ad-9adc-306f07c46255-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"00582d1a-8f52-49ad-9adc-306f07c46255\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:05:52 crc kubenswrapper[4637]: E1201 15:05:52.046376 4637 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb43f411_7028_4fde_ac84_bc4b00053f4f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb43f411_7028_4fde_ac84_bc4b00053f4f.slice/crio-4cd29e1699bd8a1e6dbf772a46b97ab11c6b1d78f3b7117f343c52b52ab76ef9\": RecentStats: unable to find data in memory cache]" Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.120592 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/00582d1a-8f52-49ad-9adc-306f07c46255-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"00582d1a-8f52-49ad-9adc-306f07c46255\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.120643 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00582d1a-8f52-49ad-9adc-306f07c46255-scripts\") pod \"glance-default-internal-api-0\" (UID: \"00582d1a-8f52-49ad-9adc-306f07c46255\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.120703 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlx2s\" (UniqueName: \"kubernetes.io/projected/00582d1a-8f52-49ad-9adc-306f07c46255-kube-api-access-xlx2s\") pod \"glance-default-internal-api-0\" (UID: \"00582d1a-8f52-49ad-9adc-306f07c46255\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.120746 4637 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00582d1a-8f52-49ad-9adc-306f07c46255-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"00582d1a-8f52-49ad-9adc-306f07c46255\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.120770 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00582d1a-8f52-49ad-9adc-306f07c46255-config-data\") pod \"glance-default-internal-api-0\" (UID: \"00582d1a-8f52-49ad-9adc-306f07c46255\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.120814 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"00582d1a-8f52-49ad-9adc-306f07c46255\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.120835 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00582d1a-8f52-49ad-9adc-306f07c46255-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"00582d1a-8f52-49ad-9adc-306f07c46255\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.120892 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00582d1a-8f52-49ad-9adc-306f07c46255-logs\") pod \"glance-default-internal-api-0\" (UID: \"00582d1a-8f52-49ad-9adc-306f07c46255\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.121278 4637 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"00582d1a-8f52-49ad-9adc-306f07c46255\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.121797 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00582d1a-8f52-49ad-9adc-306f07c46255-logs\") pod \"glance-default-internal-api-0\" (UID: \"00582d1a-8f52-49ad-9adc-306f07c46255\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.121759 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00582d1a-8f52-49ad-9adc-306f07c46255-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"00582d1a-8f52-49ad-9adc-306f07c46255\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.129514 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00582d1a-8f52-49ad-9adc-306f07c46255-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"00582d1a-8f52-49ad-9adc-306f07c46255\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.135514 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/00582d1a-8f52-49ad-9adc-306f07c46255-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"00582d1a-8f52-49ad-9adc-306f07c46255\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.136145 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00582d1a-8f52-49ad-9adc-306f07c46255-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"00582d1a-8f52-49ad-9adc-306f07c46255\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.141526 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00582d1a-8f52-49ad-9adc-306f07c46255-scripts\") pod \"glance-default-internal-api-0\" (UID: \"00582d1a-8f52-49ad-9adc-306f07c46255\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.148913 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlx2s\" (UniqueName: \"kubernetes.io/projected/00582d1a-8f52-49ad-9adc-306f07c46255-kube-api-access-xlx2s\") pod \"glance-default-internal-api-0\" (UID: \"00582d1a-8f52-49ad-9adc-306f07c46255\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.171068 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"00582d1a-8f52-49ad-9adc-306f07c46255\") " pod="openstack/glance-default-internal-api-0" Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.204206 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.301592 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.427636 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64564820-1434-4fa4-909a-7a9310f55f81","Type":"ContainerStarted","Data":"d51e92a9169d8fdf5960abe828ddc1e7098be1f29dfded6cb83701e1234fb30d"} Dec 01 15:05:52 crc kubenswrapper[4637]: I1201 15:05:52.806775 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 15:05:53 crc kubenswrapper[4637]: I1201 15:05:53.065270 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 15:05:53 crc kubenswrapper[4637]: I1201 15:05:53.500504 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64564820-1434-4fa4-909a-7a9310f55f81","Type":"ContainerStarted","Data":"6282fea901387b835f2bef126ec41dd931aa6e355f16a13834872d854b87f7dd"} Dec 01 15:05:53 crc kubenswrapper[4637]: I1201 15:05:53.506218 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"11dea63e-843a-4a51-9525-4cda961c167a","Type":"ContainerStarted","Data":"dbe99847970b427fb640f1dfaa25dc390d1d732ae7d1b0370a6236e81470ac27"} Dec 01 15:05:53 crc kubenswrapper[4637]: I1201 15:05:53.512434 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"00582d1a-8f52-49ad-9adc-306f07c46255","Type":"ContainerStarted","Data":"e13c5731dc8b388725235742ba294a500f74c12ba5f772972f0b3345346d9be2"} Dec 01 15:05:53 crc kubenswrapper[4637]: I1201 15:05:53.781975 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db43f411-7028-4fde-ac84-bc4b00053f4f" path="/var/lib/kubelet/pods/db43f411-7028-4fde-ac84-bc4b00053f4f/volumes" Dec 01 15:05:54 crc kubenswrapper[4637]: I1201 15:05:54.557479 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"64564820-1434-4fa4-909a-7a9310f55f81","Type":"ContainerStarted","Data":"91ff229e95d4d3b9b14832698dbb180bf1ef132a811c0fcaac439d2ee31d4e09"} Dec 01 15:05:54 crc kubenswrapper[4637]: I1201 15:05:54.563515 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"11dea63e-843a-4a51-9525-4cda961c167a","Type":"ContainerStarted","Data":"a28db92faadc6e32e793cf0c4f10a2e5c572656912d46ee656bbbed39cfd2d74"} Dec 01 15:05:54 crc kubenswrapper[4637]: I1201 15:05:54.577373 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"00582d1a-8f52-49ad-9adc-306f07c46255","Type":"ContainerStarted","Data":"11ab8aa4cfe8171efcf9958090cb10fa340a1f8edff21b7bb7970ed4a9090265"} Dec 01 15:05:55 crc kubenswrapper[4637]: I1201 15:05:55.587232 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"11dea63e-843a-4a51-9525-4cda961c167a","Type":"ContainerStarted","Data":"4b4da500b5ad5c7c1409b85c9bfcf5c65c620d4363729f032e226296e33522ee"} Dec 01 15:05:55 crc kubenswrapper[4637]: I1201 15:05:55.589794 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"00582d1a-8f52-49ad-9adc-306f07c46255","Type":"ContainerStarted","Data":"cb3bdce8bcadfae0bca01cb06429cf8bc2d85412f722662051e10ef3d8ae1498"} Dec 01 15:05:55 crc kubenswrapper[4637]: I1201 15:05:55.592331 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64564820-1434-4fa4-909a-7a9310f55f81","Type":"ContainerStarted","Data":"82f8f87a6af6126d471b049c59d1c4615e7b4fc7d720dc618cee827cde563f7f"} Dec 01 15:05:55 crc kubenswrapper[4637]: I1201 15:05:55.615348 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.615331954 podStartE2EDuration="4.615331954s" 
podCreationTimestamp="2025-12-01 15:05:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:05:55.611623303 +0000 UTC m=+1206.129332131" watchObservedRunningTime="2025-12-01 15:05:55.615331954 +0000 UTC m=+1206.133040782" Dec 01 15:05:55 crc kubenswrapper[4637]: I1201 15:05:55.646786 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.6467656139999995 podStartE2EDuration="4.646765614s" podCreationTimestamp="2025-12-01 15:05:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:05:55.639387944 +0000 UTC m=+1206.157096772" watchObservedRunningTime="2025-12-01 15:05:55.646765614 +0000 UTC m=+1206.164474442" Dec 01 15:05:59 crc kubenswrapper[4637]: I1201 15:05:59.632139 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64564820-1434-4fa4-909a-7a9310f55f81","Type":"ContainerStarted","Data":"2758323ba28c621da97e7cc3a2b9ffe17c20454abc769c98846b14bd9f192502"} Dec 01 15:05:59 crc kubenswrapper[4637]: I1201 15:05:59.633694 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 01 15:05:59 crc kubenswrapper[4637]: I1201 15:05:59.669555 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.779612333 podStartE2EDuration="9.66952536s" podCreationTimestamp="2025-12-01 15:05:50 +0000 UTC" firstStartedPulling="2025-12-01 15:05:51.673678754 +0000 UTC m=+1202.191387582" lastFinishedPulling="2025-12-01 15:05:58.563591781 +0000 UTC m=+1209.081300609" observedRunningTime="2025-12-01 15:05:59.657383791 +0000 UTC m=+1210.175092609" watchObservedRunningTime="2025-12-01 15:05:59.66952536 +0000 UTC m=+1210.187234188" Dec 01 15:06:02 crc 
kubenswrapper[4637]: I1201 15:06:02.204509 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 01 15:06:02 crc kubenswrapper[4637]: I1201 15:06:02.205279 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 01 15:06:02 crc kubenswrapper[4637]: I1201 15:06:02.248012 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 01 15:06:02 crc kubenswrapper[4637]: I1201 15:06:02.250393 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 01 15:06:02 crc kubenswrapper[4637]: I1201 15:06:02.302599 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 01 15:06:02 crc kubenswrapper[4637]: I1201 15:06:02.302686 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 01 15:06:02 crc kubenswrapper[4637]: E1201 15:06:02.321551 4637 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod525ed4f2_f6a1_436b_8119_cf3fc620c6a7.slice/crio-conmon-4c0b37344f846e257e75801a2714dd437589f9bce749321dd90ef3d8d6e40aa7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod525ed4f2_f6a1_436b_8119_cf3fc620c6a7.slice/crio-4c0b37344f846e257e75801a2714dd437589f9bce749321dd90ef3d8d6e40aa7.scope\": RecentStats: unable to find data in memory cache]" Dec 01 15:06:02 crc kubenswrapper[4637]: I1201 15:06:02.346334 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 01 15:06:02 crc kubenswrapper[4637]: I1201 15:06:02.394999 4637 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 01 15:06:02 crc kubenswrapper[4637]: I1201 15:06:02.618609 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:06:02 crc kubenswrapper[4637]: I1201 15:06:02.663915 4637 generic.go:334] "Generic (PLEG): container finished" podID="525ed4f2-f6a1-436b-8119-cf3fc620c6a7" containerID="4c0b37344f846e257e75801a2714dd437589f9bce749321dd90ef3d8d6e40aa7" exitCode=0 Dec 01 15:06:02 crc kubenswrapper[4637]: I1201 15:06:02.666512 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ddhlw" event={"ID":"525ed4f2-f6a1-436b-8119-cf3fc620c6a7","Type":"ContainerDied","Data":"4c0b37344f846e257e75801a2714dd437589f9bce749321dd90ef3d8d6e40aa7"} Dec 01 15:06:02 crc kubenswrapper[4637]: I1201 15:06:02.666558 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 01 15:06:02 crc kubenswrapper[4637]: I1201 15:06:02.667217 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="64564820-1434-4fa4-909a-7a9310f55f81" containerName="ceilometer-central-agent" containerID="cri-o://6282fea901387b835f2bef126ec41dd931aa6e355f16a13834872d854b87f7dd" gracePeriod=30 Dec 01 15:06:02 crc kubenswrapper[4637]: I1201 15:06:02.669369 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="64564820-1434-4fa4-909a-7a9310f55f81" containerName="proxy-httpd" containerID="cri-o://2758323ba28c621da97e7cc3a2b9ffe17c20454abc769c98846b14bd9f192502" gracePeriod=30 Dec 01 15:06:02 crc kubenswrapper[4637]: I1201 15:06:02.669444 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 01 15:06:02 crc kubenswrapper[4637]: I1201 15:06:02.669519 4637 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 01 15:06:02 crc kubenswrapper[4637]: I1201 15:06:02.669445 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="64564820-1434-4fa4-909a-7a9310f55f81" containerName="ceilometer-notification-agent" containerID="cri-o://91ff229e95d4d3b9b14832698dbb180bf1ef132a811c0fcaac439d2ee31d4e09" gracePeriod=30 Dec 01 15:06:02 crc kubenswrapper[4637]: I1201 15:06:02.669645 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 01 15:06:02 crc kubenswrapper[4637]: I1201 15:06:02.669364 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="64564820-1434-4fa4-909a-7a9310f55f81" containerName="sg-core" containerID="cri-o://82f8f87a6af6126d471b049c59d1c4615e7b4fc7d720dc618cee827cde563f7f" gracePeriod=30 Dec 01 15:06:03 crc kubenswrapper[4637]: I1201 15:06:03.676111 4637 generic.go:334] "Generic (PLEG): container finished" podID="64564820-1434-4fa4-909a-7a9310f55f81" containerID="2758323ba28c621da97e7cc3a2b9ffe17c20454abc769c98846b14bd9f192502" exitCode=0 Dec 01 15:06:03 crc kubenswrapper[4637]: I1201 15:06:03.677774 4637 generic.go:334] "Generic (PLEG): container finished" podID="64564820-1434-4fa4-909a-7a9310f55f81" containerID="82f8f87a6af6126d471b049c59d1c4615e7b4fc7d720dc618cee827cde563f7f" exitCode=2 Dec 01 15:06:03 crc kubenswrapper[4637]: I1201 15:06:03.677898 4637 generic.go:334] "Generic (PLEG): container finished" podID="64564820-1434-4fa4-909a-7a9310f55f81" containerID="91ff229e95d4d3b9b14832698dbb180bf1ef132a811c0fcaac439d2ee31d4e09" exitCode=0 Dec 01 15:06:03 crc kubenswrapper[4637]: I1201 15:06:03.676184 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"64564820-1434-4fa4-909a-7a9310f55f81","Type":"ContainerDied","Data":"2758323ba28c621da97e7cc3a2b9ffe17c20454abc769c98846b14bd9f192502"} Dec 01 15:06:03 crc kubenswrapper[4637]: I1201 15:06:03.678158 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64564820-1434-4fa4-909a-7a9310f55f81","Type":"ContainerDied","Data":"82f8f87a6af6126d471b049c59d1c4615e7b4fc7d720dc618cee827cde563f7f"} Dec 01 15:06:03 crc kubenswrapper[4637]: I1201 15:06:03.678183 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64564820-1434-4fa4-909a-7a9310f55f81","Type":"ContainerDied","Data":"91ff229e95d4d3b9b14832698dbb180bf1ef132a811c0fcaac439d2ee31d4e09"} Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.054897 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ddhlw" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.124551 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/525ed4f2-f6a1-436b-8119-cf3fc620c6a7-config-data\") pod \"525ed4f2-f6a1-436b-8119-cf3fc620c6a7\" (UID: \"525ed4f2-f6a1-436b-8119-cf3fc620c6a7\") " Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.124698 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scb7z\" (UniqueName: \"kubernetes.io/projected/525ed4f2-f6a1-436b-8119-cf3fc620c6a7-kube-api-access-scb7z\") pod \"525ed4f2-f6a1-436b-8119-cf3fc620c6a7\" (UID: \"525ed4f2-f6a1-436b-8119-cf3fc620c6a7\") " Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.124848 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/525ed4f2-f6a1-436b-8119-cf3fc620c6a7-combined-ca-bundle\") pod \"525ed4f2-f6a1-436b-8119-cf3fc620c6a7\" (UID: 
\"525ed4f2-f6a1-436b-8119-cf3fc620c6a7\") " Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.124871 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/525ed4f2-f6a1-436b-8119-cf3fc620c6a7-scripts\") pod \"525ed4f2-f6a1-436b-8119-cf3fc620c6a7\" (UID: \"525ed4f2-f6a1-436b-8119-cf3fc620c6a7\") " Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.134197 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/525ed4f2-f6a1-436b-8119-cf3fc620c6a7-scripts" (OuterVolumeSpecName: "scripts") pod "525ed4f2-f6a1-436b-8119-cf3fc620c6a7" (UID: "525ed4f2-f6a1-436b-8119-cf3fc620c6a7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.138242 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/525ed4f2-f6a1-436b-8119-cf3fc620c6a7-kube-api-access-scb7z" (OuterVolumeSpecName: "kube-api-access-scb7z") pod "525ed4f2-f6a1-436b-8119-cf3fc620c6a7" (UID: "525ed4f2-f6a1-436b-8119-cf3fc620c6a7"). InnerVolumeSpecName "kube-api-access-scb7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.158139 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/525ed4f2-f6a1-436b-8119-cf3fc620c6a7-config-data" (OuterVolumeSpecName: "config-data") pod "525ed4f2-f6a1-436b-8119-cf3fc620c6a7" (UID: "525ed4f2-f6a1-436b-8119-cf3fc620c6a7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.170188 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/525ed4f2-f6a1-436b-8119-cf3fc620c6a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "525ed4f2-f6a1-436b-8119-cf3fc620c6a7" (UID: "525ed4f2-f6a1-436b-8119-cf3fc620c6a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.227032 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/525ed4f2-f6a1-436b-8119-cf3fc620c6a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.227060 4637 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/525ed4f2-f6a1-436b-8119-cf3fc620c6a7-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.227069 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/525ed4f2-f6a1-436b-8119-cf3fc620c6a7-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.227079 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scb7z\" (UniqueName: \"kubernetes.io/projected/525ed4f2-f6a1-436b-8119-cf3fc620c6a7-kube-api-access-scb7z\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.269584 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.328052 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/64564820-1434-4fa4-909a-7a9310f55f81-sg-core-conf-yaml\") pod \"64564820-1434-4fa4-909a-7a9310f55f81\" (UID: \"64564820-1434-4fa4-909a-7a9310f55f81\") " Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.328114 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2qt4\" (UniqueName: \"kubernetes.io/projected/64564820-1434-4fa4-909a-7a9310f55f81-kube-api-access-g2qt4\") pod \"64564820-1434-4fa4-909a-7a9310f55f81\" (UID: \"64564820-1434-4fa4-909a-7a9310f55f81\") " Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.328168 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64564820-1434-4fa4-909a-7a9310f55f81-run-httpd\") pod \"64564820-1434-4fa4-909a-7a9310f55f81\" (UID: \"64564820-1434-4fa4-909a-7a9310f55f81\") " Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.328293 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64564820-1434-4fa4-909a-7a9310f55f81-combined-ca-bundle\") pod \"64564820-1434-4fa4-909a-7a9310f55f81\" (UID: \"64564820-1434-4fa4-909a-7a9310f55f81\") " Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.328326 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64564820-1434-4fa4-909a-7a9310f55f81-scripts\") pod \"64564820-1434-4fa4-909a-7a9310f55f81\" (UID: \"64564820-1434-4fa4-909a-7a9310f55f81\") " Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.328357 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/64564820-1434-4fa4-909a-7a9310f55f81-log-httpd\") pod \"64564820-1434-4fa4-909a-7a9310f55f81\" (UID: \"64564820-1434-4fa4-909a-7a9310f55f81\") " Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.328396 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64564820-1434-4fa4-909a-7a9310f55f81-config-data\") pod \"64564820-1434-4fa4-909a-7a9310f55f81\" (UID: \"64564820-1434-4fa4-909a-7a9310f55f81\") " Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.334320 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64564820-1434-4fa4-909a-7a9310f55f81-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "64564820-1434-4fa4-909a-7a9310f55f81" (UID: "64564820-1434-4fa4-909a-7a9310f55f81"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.337049 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64564820-1434-4fa4-909a-7a9310f55f81-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "64564820-1434-4fa4-909a-7a9310f55f81" (UID: "64564820-1434-4fa4-909a-7a9310f55f81"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.340995 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64564820-1434-4fa4-909a-7a9310f55f81-kube-api-access-g2qt4" (OuterVolumeSpecName: "kube-api-access-g2qt4") pod "64564820-1434-4fa4-909a-7a9310f55f81" (UID: "64564820-1434-4fa4-909a-7a9310f55f81"). InnerVolumeSpecName "kube-api-access-g2qt4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.346517 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64564820-1434-4fa4-909a-7a9310f55f81-scripts" (OuterVolumeSpecName: "scripts") pod "64564820-1434-4fa4-909a-7a9310f55f81" (UID: "64564820-1434-4fa4-909a-7a9310f55f81"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.359576 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64564820-1434-4fa4-909a-7a9310f55f81-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "64564820-1434-4fa4-909a-7a9310f55f81" (UID: "64564820-1434-4fa4-909a-7a9310f55f81"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.430733 4637 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64564820-1434-4fa4-909a-7a9310f55f81-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.431058 4637 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64564820-1434-4fa4-909a-7a9310f55f81-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.431159 4637 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/64564820-1434-4fa4-909a-7a9310f55f81-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.431281 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2qt4\" (UniqueName: \"kubernetes.io/projected/64564820-1434-4fa4-909a-7a9310f55f81-kube-api-access-g2qt4\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:04 crc 
kubenswrapper[4637]: I1201 15:06:04.431351 4637 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64564820-1434-4fa4-909a-7a9310f55f81-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.439385 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64564820-1434-4fa4-909a-7a9310f55f81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64564820-1434-4fa4-909a-7a9310f55f81" (UID: "64564820-1434-4fa4-909a-7a9310f55f81"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.440796 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64564820-1434-4fa4-909a-7a9310f55f81-config-data" (OuterVolumeSpecName: "config-data") pod "64564820-1434-4fa4-909a-7a9310f55f81" (UID: "64564820-1434-4fa4-909a-7a9310f55f81"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.533060 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64564820-1434-4fa4-909a-7a9310f55f81-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.533105 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64564820-1434-4fa4-909a-7a9310f55f81-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.696996 4637 generic.go:334] "Generic (PLEG): container finished" podID="64564820-1434-4fa4-909a-7a9310f55f81" containerID="6282fea901387b835f2bef126ec41dd931aa6e355f16a13834872d854b87f7dd" exitCode=0 Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.697038 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64564820-1434-4fa4-909a-7a9310f55f81","Type":"ContainerDied","Data":"6282fea901387b835f2bef126ec41dd931aa6e355f16a13834872d854b87f7dd"} Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.697090 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64564820-1434-4fa4-909a-7a9310f55f81","Type":"ContainerDied","Data":"d51e92a9169d8fdf5960abe828ddc1e7098be1f29dfded6cb83701e1234fb30d"} Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.697107 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.697111 4637 scope.go:117] "RemoveContainer" containerID="2758323ba28c621da97e7cc3a2b9ffe17c20454abc769c98846b14bd9f192502" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.700664 4637 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.700696 4637 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.701053 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ddhlw" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.701224 4637 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.701251 4637 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.701114 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ddhlw" event={"ID":"525ed4f2-f6a1-436b-8119-cf3fc620c6a7","Type":"ContainerDied","Data":"ecb7a32c4a614e23a2f79d299044f7ed2974b17d5197c1e55b861d24a19a10d8"} Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.701297 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecb7a32c4a614e23a2f79d299044f7ed2974b17d5197c1e55b861d24a19a10d8" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.755398 4637 scope.go:117] "RemoveContainer" containerID="82f8f87a6af6126d471b049c59d1c4615e7b4fc7d720dc618cee827cde563f7f" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.771018 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.784789 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/ceilometer-0"] Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.807225 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:06:04 crc kubenswrapper[4637]: E1201 15:06:04.807971 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64564820-1434-4fa4-909a-7a9310f55f81" containerName="ceilometer-notification-agent" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.807990 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="64564820-1434-4fa4-909a-7a9310f55f81" containerName="ceilometer-notification-agent" Dec 01 15:06:04 crc kubenswrapper[4637]: E1201 15:06:04.808004 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64564820-1434-4fa4-909a-7a9310f55f81" containerName="ceilometer-central-agent" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.808010 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="64564820-1434-4fa4-909a-7a9310f55f81" containerName="ceilometer-central-agent" Dec 01 15:06:04 crc kubenswrapper[4637]: E1201 15:06:04.808030 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="525ed4f2-f6a1-436b-8119-cf3fc620c6a7" containerName="nova-cell0-conductor-db-sync" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.808036 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="525ed4f2-f6a1-436b-8119-cf3fc620c6a7" containerName="nova-cell0-conductor-db-sync" Dec 01 15:06:04 crc kubenswrapper[4637]: E1201 15:06:04.808052 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64564820-1434-4fa4-909a-7a9310f55f81" containerName="sg-core" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.808065 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="64564820-1434-4fa4-909a-7a9310f55f81" containerName="sg-core" Dec 01 15:06:04 crc kubenswrapper[4637]: E1201 15:06:04.808077 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64564820-1434-4fa4-909a-7a9310f55f81" 
containerName="proxy-httpd" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.808085 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="64564820-1434-4fa4-909a-7a9310f55f81" containerName="proxy-httpd" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.808303 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="64564820-1434-4fa4-909a-7a9310f55f81" containerName="proxy-httpd" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.808313 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="64564820-1434-4fa4-909a-7a9310f55f81" containerName="sg-core" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.808323 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="64564820-1434-4fa4-909a-7a9310f55f81" containerName="ceilometer-central-agent" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.808334 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="64564820-1434-4fa4-909a-7a9310f55f81" containerName="ceilometer-notification-agent" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.808346 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="525ed4f2-f6a1-436b-8119-cf3fc620c6a7" containerName="nova-cell0-conductor-db-sync" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.811756 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.822564 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.845524 4637 scope.go:117] "RemoveContainer" containerID="91ff229e95d4d3b9b14832698dbb180bf1ef132a811c0fcaac439d2ee31d4e09" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.847977 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.850385 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.918518 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.921517 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.925259 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-4dqh8" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.926984 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.946137 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d1bb45-b84a-46f0-8c3b-e906382a7020-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"93d1bb45-b84a-46f0-8c3b-e906382a7020\") " pod="openstack/ceilometer-0" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.946245 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/93d1bb45-b84a-46f0-8c3b-e906382a7020-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"93d1bb45-b84a-46f0-8c3b-e906382a7020\") " pod="openstack/ceilometer-0" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.946329 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93d1bb45-b84a-46f0-8c3b-e906382a7020-config-data\") pod \"ceilometer-0\" (UID: \"93d1bb45-b84a-46f0-8c3b-e906382a7020\") " pod="openstack/ceilometer-0" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.946415 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcp2d\" (UniqueName: \"kubernetes.io/projected/93d1bb45-b84a-46f0-8c3b-e906382a7020-kube-api-access-rcp2d\") pod \"ceilometer-0\" (UID: \"93d1bb45-b84a-46f0-8c3b-e906382a7020\") " pod="openstack/ceilometer-0" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.946439 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93d1bb45-b84a-46f0-8c3b-e906382a7020-scripts\") pod \"ceilometer-0\" (UID: \"93d1bb45-b84a-46f0-8c3b-e906382a7020\") " pod="openstack/ceilometer-0" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.946503 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93d1bb45-b84a-46f0-8c3b-e906382a7020-run-httpd\") pod \"ceilometer-0\" (UID: \"93d1bb45-b84a-46f0-8c3b-e906382a7020\") " pod="openstack/ceilometer-0" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.946525 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93d1bb45-b84a-46f0-8c3b-e906382a7020-log-httpd\") pod \"ceilometer-0\" (UID: \"93d1bb45-b84a-46f0-8c3b-e906382a7020\") " 
pod="openstack/ceilometer-0" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.964712 4637 scope.go:117] "RemoveContainer" containerID="6282fea901387b835f2bef126ec41dd931aa6e355f16a13834872d854b87f7dd" Dec 01 15:06:04 crc kubenswrapper[4637]: I1201 15:06:04.985191 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.049470 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q682d\" (UniqueName: \"kubernetes.io/projected/1ca73521-4fd3-4ff2-b490-7e52488a96e4-kube-api-access-q682d\") pod \"nova-cell0-conductor-0\" (UID: \"1ca73521-4fd3-4ff2-b490-7e52488a96e4\") " pod="openstack/nova-cell0-conductor-0" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.049550 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcp2d\" (UniqueName: \"kubernetes.io/projected/93d1bb45-b84a-46f0-8c3b-e906382a7020-kube-api-access-rcp2d\") pod \"ceilometer-0\" (UID: \"93d1bb45-b84a-46f0-8c3b-e906382a7020\") " pod="openstack/ceilometer-0" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.049579 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ca73521-4fd3-4ff2-b490-7e52488a96e4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1ca73521-4fd3-4ff2-b490-7e52488a96e4\") " pod="openstack/nova-cell0-conductor-0" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.049594 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93d1bb45-b84a-46f0-8c3b-e906382a7020-scripts\") pod \"ceilometer-0\" (UID: \"93d1bb45-b84a-46f0-8c3b-e906382a7020\") " pod="openstack/ceilometer-0" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.049620 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ca73521-4fd3-4ff2-b490-7e52488a96e4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1ca73521-4fd3-4ff2-b490-7e52488a96e4\") " pod="openstack/nova-cell0-conductor-0" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.049667 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93d1bb45-b84a-46f0-8c3b-e906382a7020-run-httpd\") pod \"ceilometer-0\" (UID: \"93d1bb45-b84a-46f0-8c3b-e906382a7020\") " pod="openstack/ceilometer-0" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.049681 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93d1bb45-b84a-46f0-8c3b-e906382a7020-log-httpd\") pod \"ceilometer-0\" (UID: \"93d1bb45-b84a-46f0-8c3b-e906382a7020\") " pod="openstack/ceilometer-0" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.049707 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d1bb45-b84a-46f0-8c3b-e906382a7020-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"93d1bb45-b84a-46f0-8c3b-e906382a7020\") " pod="openstack/ceilometer-0" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.049767 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/93d1bb45-b84a-46f0-8c3b-e906382a7020-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"93d1bb45-b84a-46f0-8c3b-e906382a7020\") " pod="openstack/ceilometer-0" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.049806 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93d1bb45-b84a-46f0-8c3b-e906382a7020-config-data\") pod \"ceilometer-0\" (UID: 
\"93d1bb45-b84a-46f0-8c3b-e906382a7020\") " pod="openstack/ceilometer-0" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.058706 4637 scope.go:117] "RemoveContainer" containerID="2758323ba28c621da97e7cc3a2b9ffe17c20454abc769c98846b14bd9f192502" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.059095 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93d1bb45-b84a-46f0-8c3b-e906382a7020-log-httpd\") pod \"ceilometer-0\" (UID: \"93d1bb45-b84a-46f0-8c3b-e906382a7020\") " pod="openstack/ceilometer-0" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.059185 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93d1bb45-b84a-46f0-8c3b-e906382a7020-config-data\") pod \"ceilometer-0\" (UID: \"93d1bb45-b84a-46f0-8c3b-e906382a7020\") " pod="openstack/ceilometer-0" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.059276 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93d1bb45-b84a-46f0-8c3b-e906382a7020-run-httpd\") pod \"ceilometer-0\" (UID: \"93d1bb45-b84a-46f0-8c3b-e906382a7020\") " pod="openstack/ceilometer-0" Dec 01 15:06:05 crc kubenswrapper[4637]: E1201 15:06:05.061255 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2758323ba28c621da97e7cc3a2b9ffe17c20454abc769c98846b14bd9f192502\": container with ID starting with 2758323ba28c621da97e7cc3a2b9ffe17c20454abc769c98846b14bd9f192502 not found: ID does not exist" containerID="2758323ba28c621da97e7cc3a2b9ffe17c20454abc769c98846b14bd9f192502" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.061320 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2758323ba28c621da97e7cc3a2b9ffe17c20454abc769c98846b14bd9f192502"} err="failed to get container 
status \"2758323ba28c621da97e7cc3a2b9ffe17c20454abc769c98846b14bd9f192502\": rpc error: code = NotFound desc = could not find container \"2758323ba28c621da97e7cc3a2b9ffe17c20454abc769c98846b14bd9f192502\": container with ID starting with 2758323ba28c621da97e7cc3a2b9ffe17c20454abc769c98846b14bd9f192502 not found: ID does not exist" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.061361 4637 scope.go:117] "RemoveContainer" containerID="82f8f87a6af6126d471b049c59d1c4615e7b4fc7d720dc618cee827cde563f7f" Dec 01 15:06:05 crc kubenswrapper[4637]: E1201 15:06:05.061966 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82f8f87a6af6126d471b049c59d1c4615e7b4fc7d720dc618cee827cde563f7f\": container with ID starting with 82f8f87a6af6126d471b049c59d1c4615e7b4fc7d720dc618cee827cde563f7f not found: ID does not exist" containerID="82f8f87a6af6126d471b049c59d1c4615e7b4fc7d720dc618cee827cde563f7f" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.062076 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82f8f87a6af6126d471b049c59d1c4615e7b4fc7d720dc618cee827cde563f7f"} err="failed to get container status \"82f8f87a6af6126d471b049c59d1c4615e7b4fc7d720dc618cee827cde563f7f\": rpc error: code = NotFound desc = could not find container \"82f8f87a6af6126d471b049c59d1c4615e7b4fc7d720dc618cee827cde563f7f\": container with ID starting with 82f8f87a6af6126d471b049c59d1c4615e7b4fc7d720dc618cee827cde563f7f not found: ID does not exist" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.062176 4637 scope.go:117] "RemoveContainer" containerID="91ff229e95d4d3b9b14832698dbb180bf1ef132a811c0fcaac439d2ee31d4e09" Dec 01 15:06:05 crc kubenswrapper[4637]: E1201 15:06:05.062608 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"91ff229e95d4d3b9b14832698dbb180bf1ef132a811c0fcaac439d2ee31d4e09\": container with ID starting with 91ff229e95d4d3b9b14832698dbb180bf1ef132a811c0fcaac439d2ee31d4e09 not found: ID does not exist" containerID="91ff229e95d4d3b9b14832698dbb180bf1ef132a811c0fcaac439d2ee31d4e09" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.062651 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91ff229e95d4d3b9b14832698dbb180bf1ef132a811c0fcaac439d2ee31d4e09"} err="failed to get container status \"91ff229e95d4d3b9b14832698dbb180bf1ef132a811c0fcaac439d2ee31d4e09\": rpc error: code = NotFound desc = could not find container \"91ff229e95d4d3b9b14832698dbb180bf1ef132a811c0fcaac439d2ee31d4e09\": container with ID starting with 91ff229e95d4d3b9b14832698dbb180bf1ef132a811c0fcaac439d2ee31d4e09 not found: ID does not exist" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.062683 4637 scope.go:117] "RemoveContainer" containerID="6282fea901387b835f2bef126ec41dd931aa6e355f16a13834872d854b87f7dd" Dec 01 15:06:05 crc kubenswrapper[4637]: E1201 15:06:05.063183 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6282fea901387b835f2bef126ec41dd931aa6e355f16a13834872d854b87f7dd\": container with ID starting with 6282fea901387b835f2bef126ec41dd931aa6e355f16a13834872d854b87f7dd not found: ID does not exist" containerID="6282fea901387b835f2bef126ec41dd931aa6e355f16a13834872d854b87f7dd" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.063274 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6282fea901387b835f2bef126ec41dd931aa6e355f16a13834872d854b87f7dd"} err="failed to get container status \"6282fea901387b835f2bef126ec41dd931aa6e355f16a13834872d854b87f7dd\": rpc error: code = NotFound desc = could not find container \"6282fea901387b835f2bef126ec41dd931aa6e355f16a13834872d854b87f7dd\": container with ID 
starting with 6282fea901387b835f2bef126ec41dd931aa6e355f16a13834872d854b87f7dd not found: ID does not exist" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.063786 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d1bb45-b84a-46f0-8c3b-e906382a7020-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"93d1bb45-b84a-46f0-8c3b-e906382a7020\") " pod="openstack/ceilometer-0" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.073096 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/93d1bb45-b84a-46f0-8c3b-e906382a7020-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"93d1bb45-b84a-46f0-8c3b-e906382a7020\") " pod="openstack/ceilometer-0" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.075228 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcp2d\" (UniqueName: \"kubernetes.io/projected/93d1bb45-b84a-46f0-8c3b-e906382a7020-kube-api-access-rcp2d\") pod \"ceilometer-0\" (UID: \"93d1bb45-b84a-46f0-8c3b-e906382a7020\") " pod="openstack/ceilometer-0" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.089645 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93d1bb45-b84a-46f0-8c3b-e906382a7020-scripts\") pod \"ceilometer-0\" (UID: \"93d1bb45-b84a-46f0-8c3b-e906382a7020\") " pod="openstack/ceilometer-0" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.151440 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q682d\" (UniqueName: \"kubernetes.io/projected/1ca73521-4fd3-4ff2-b490-7e52488a96e4-kube-api-access-q682d\") pod \"nova-cell0-conductor-0\" (UID: \"1ca73521-4fd3-4ff2-b490-7e52488a96e4\") " pod="openstack/nova-cell0-conductor-0" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.152178 4637 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ca73521-4fd3-4ff2-b490-7e52488a96e4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1ca73521-4fd3-4ff2-b490-7e52488a96e4\") " pod="openstack/nova-cell0-conductor-0" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.152254 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ca73521-4fd3-4ff2-b490-7e52488a96e4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1ca73521-4fd3-4ff2-b490-7e52488a96e4\") " pod="openstack/nova-cell0-conductor-0" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.156103 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.156501 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ca73521-4fd3-4ff2-b490-7e52488a96e4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1ca73521-4fd3-4ff2-b490-7e52488a96e4\") " pod="openstack/nova-cell0-conductor-0" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.167821 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ca73521-4fd3-4ff2-b490-7e52488a96e4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1ca73521-4fd3-4ff2-b490-7e52488a96e4\") " pod="openstack/nova-cell0-conductor-0" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.175671 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q682d\" (UniqueName: \"kubernetes.io/projected/1ca73521-4fd3-4ff2-b490-7e52488a96e4-kube-api-access-q682d\") pod \"nova-cell0-conductor-0\" (UID: \"1ca73521-4fd3-4ff2-b490-7e52488a96e4\") " pod="openstack/nova-cell0-conductor-0" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 
15:06:05.268345 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.556255 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.736172 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93d1bb45-b84a-46f0-8c3b-e906382a7020","Type":"ContainerStarted","Data":"8bb64f057debdddd8c8ceb564f67d2c51803f26852b2c3cf503ce912c7948c9d"} Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.781758 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64564820-1434-4fa4-909a-7a9310f55f81" path="/var/lib/kubelet/pods/64564820-1434-4fa4-909a-7a9310f55f81/volumes" Dec 01 15:06:05 crc kubenswrapper[4637]: I1201 15:06:05.835345 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 01 15:06:05 crc kubenswrapper[4637]: W1201 15:06:05.835360 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ca73521_4fd3_4ff2_b490_7e52488a96e4.slice/crio-187ee433d8eab22bd716361af4a9892968c3694da0a82a6d3fce86d135ce0ab9 WatchSource:0}: Error finding container 187ee433d8eab22bd716361af4a9892968c3694da0a82a6d3fce86d135ce0ab9: Status 404 returned error can't find the container with id 187ee433d8eab22bd716361af4a9892968c3694da0a82a6d3fce86d135ce0ab9 Dec 01 15:06:06 crc kubenswrapper[4637]: I1201 15:06:06.336320 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 01 15:06:06 crc kubenswrapper[4637]: I1201 15:06:06.336922 4637 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 15:06:06 crc kubenswrapper[4637]: I1201 15:06:06.422560 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-internal-api-0" Dec 01 15:06:06 crc kubenswrapper[4637]: I1201 15:06:06.448393 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 01 15:06:06 crc kubenswrapper[4637]: I1201 15:06:06.448578 4637 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 15:06:06 crc kubenswrapper[4637]: I1201 15:06:06.449184 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 01 15:06:06 crc kubenswrapper[4637]: I1201 15:06:06.755307 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93d1bb45-b84a-46f0-8c3b-e906382a7020","Type":"ContainerStarted","Data":"7e3b9ba18cb546a8aadb454296bf2fe279b01c336934492a27610ae0fb9a8251"} Dec 01 15:06:06 crc kubenswrapper[4637]: I1201 15:06:06.758196 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1ca73521-4fd3-4ff2-b490-7e52488a96e4","Type":"ContainerStarted","Data":"360d17d35f2e86e3c6a7e016f86af12685338054e028074adc8fde276f37717b"} Dec 01 15:06:06 crc kubenswrapper[4637]: I1201 15:06:06.758230 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1ca73521-4fd3-4ff2-b490-7e52488a96e4","Type":"ContainerStarted","Data":"187ee433d8eab22bd716361af4a9892968c3694da0a82a6d3fce86d135ce0ab9"} Dec 01 15:06:06 crc kubenswrapper[4637]: I1201 15:06:06.758691 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 01 15:06:06 crc kubenswrapper[4637]: I1201 15:06:06.818005 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.817981503 podStartE2EDuration="2.817981503s" podCreationTimestamp="2025-12-01 15:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:06:06.809152844 +0000 UTC m=+1217.326861672" watchObservedRunningTime="2025-12-01 15:06:06.817981503 +0000 UTC m=+1217.335690331" Dec 01 15:06:07 crc kubenswrapper[4637]: I1201 15:06:07.769299 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93d1bb45-b84a-46f0-8c3b-e906382a7020","Type":"ContainerStarted","Data":"c00c8962aa1af7ce8804f4991e9f57221bcad536881988a1f51fce3d44b3e754"} Dec 01 15:06:08 crc kubenswrapper[4637]: I1201 15:06:08.782119 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93d1bb45-b84a-46f0-8c3b-e906382a7020","Type":"ContainerStarted","Data":"1f79e2f4ec8a872fd15662df143ab7d0e336d047974b02ded57bb7f5932355da"} Dec 01 15:06:10 crc kubenswrapper[4637]: I1201 15:06:10.800867 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93d1bb45-b84a-46f0-8c3b-e906382a7020","Type":"ContainerStarted","Data":"b93dd185467caff69ed6d745379f87d877abe036dd7e7af7c83962eb3c9128a7"} Dec 01 15:06:10 crc kubenswrapper[4637]: I1201 15:06:10.801608 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 01 15:06:10 crc kubenswrapper[4637]: I1201 15:06:10.834992 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.594164436 podStartE2EDuration="6.834971602s" podCreationTimestamp="2025-12-01 15:06:04 +0000 UTC" firstStartedPulling="2025-12-01 15:06:05.559147356 +0000 UTC m=+1216.076856184" lastFinishedPulling="2025-12-01 15:06:09.799954522 +0000 UTC m=+1220.317663350" observedRunningTime="2025-12-01 15:06:10.834492709 +0000 UTC m=+1221.352201537" watchObservedRunningTime="2025-12-01 15:06:10.834971602 +0000 UTC m=+1221.352680420" Dec 01 15:06:15 crc kubenswrapper[4637]: I1201 15:06:15.320221 4637 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 01 15:06:15 crc kubenswrapper[4637]: I1201 15:06:15.865821 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-thknm"] Dec 01 15:06:15 crc kubenswrapper[4637]: I1201 15:06:15.867277 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-thknm" Dec 01 15:06:15 crc kubenswrapper[4637]: I1201 15:06:15.870342 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 01 15:06:15 crc kubenswrapper[4637]: I1201 15:06:15.870392 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 01 15:06:15 crc kubenswrapper[4637]: I1201 15:06:15.880530 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-thknm"] Dec 01 15:06:15 crc kubenswrapper[4637]: I1201 15:06:15.896553 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9abff6ee-1b19-4555-9ba5-2f522f7b3d2c-scripts\") pod \"nova-cell0-cell-mapping-thknm\" (UID: \"9abff6ee-1b19-4555-9ba5-2f522f7b3d2c\") " pod="openstack/nova-cell0-cell-mapping-thknm" Dec 01 15:06:15 crc kubenswrapper[4637]: I1201 15:06:15.896622 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9abff6ee-1b19-4555-9ba5-2f522f7b3d2c-config-data\") pod \"nova-cell0-cell-mapping-thknm\" (UID: \"9abff6ee-1b19-4555-9ba5-2f522f7b3d2c\") " pod="openstack/nova-cell0-cell-mapping-thknm" Dec 01 15:06:15 crc kubenswrapper[4637]: I1201 15:06:15.896758 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9abff6ee-1b19-4555-9ba5-2f522f7b3d2c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-thknm\" (UID: \"9abff6ee-1b19-4555-9ba5-2f522f7b3d2c\") " pod="openstack/nova-cell0-cell-mapping-thknm" Dec 01 15:06:15 crc kubenswrapper[4637]: I1201 15:06:15.896803 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z94nf\" (UniqueName: \"kubernetes.io/projected/9abff6ee-1b19-4555-9ba5-2f522f7b3d2c-kube-api-access-z94nf\") pod \"nova-cell0-cell-mapping-thknm\" (UID: \"9abff6ee-1b19-4555-9ba5-2f522f7b3d2c\") " pod="openstack/nova-cell0-cell-mapping-thknm" Dec 01 15:06:15 crc kubenswrapper[4637]: I1201 15:06:15.998339 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9abff6ee-1b19-4555-9ba5-2f522f7b3d2c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-thknm\" (UID: \"9abff6ee-1b19-4555-9ba5-2f522f7b3d2c\") " pod="openstack/nova-cell0-cell-mapping-thknm" Dec 01 15:06:15 crc kubenswrapper[4637]: I1201 15:06:15.998833 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z94nf\" (UniqueName: \"kubernetes.io/projected/9abff6ee-1b19-4555-9ba5-2f522f7b3d2c-kube-api-access-z94nf\") pod \"nova-cell0-cell-mapping-thknm\" (UID: \"9abff6ee-1b19-4555-9ba5-2f522f7b3d2c\") " pod="openstack/nova-cell0-cell-mapping-thknm" Dec 01 15:06:15 crc kubenswrapper[4637]: I1201 15:06:15.999133 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9abff6ee-1b19-4555-9ba5-2f522f7b3d2c-scripts\") pod \"nova-cell0-cell-mapping-thknm\" (UID: \"9abff6ee-1b19-4555-9ba5-2f522f7b3d2c\") " pod="openstack/nova-cell0-cell-mapping-thknm" Dec 01 15:06:15 crc kubenswrapper[4637]: I1201 15:06:15.999275 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9abff6ee-1b19-4555-9ba5-2f522f7b3d2c-config-data\") pod \"nova-cell0-cell-mapping-thknm\" (UID: \"9abff6ee-1b19-4555-9ba5-2f522f7b3d2c\") " pod="openstack/nova-cell0-cell-mapping-thknm" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.004643 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9abff6ee-1b19-4555-9ba5-2f522f7b3d2c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-thknm\" (UID: \"9abff6ee-1b19-4555-9ba5-2f522f7b3d2c\") " pod="openstack/nova-cell0-cell-mapping-thknm" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.023598 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9abff6ee-1b19-4555-9ba5-2f522f7b3d2c-scripts\") pod \"nova-cell0-cell-mapping-thknm\" (UID: \"9abff6ee-1b19-4555-9ba5-2f522f7b3d2c\") " pod="openstack/nova-cell0-cell-mapping-thknm" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.040036 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9abff6ee-1b19-4555-9ba5-2f522f7b3d2c-config-data\") pod \"nova-cell0-cell-mapping-thknm\" (UID: \"9abff6ee-1b19-4555-9ba5-2f522f7b3d2c\") " pod="openstack/nova-cell0-cell-mapping-thknm" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.046084 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z94nf\" (UniqueName: \"kubernetes.io/projected/9abff6ee-1b19-4555-9ba5-2f522f7b3d2c-kube-api-access-z94nf\") pod \"nova-cell0-cell-mapping-thknm\" (UID: \"9abff6ee-1b19-4555-9ba5-2f522f7b3d2c\") " pod="openstack/nova-cell0-cell-mapping-thknm" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.150156 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.155579 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.183444 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.191652 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.192782 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.198636 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.202779 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b73ad837-b207-4701-9444-13a324388bcc-logs\") pod \"nova-api-0\" (UID: \"b73ad837-b207-4701-9444-13a324388bcc\") " pod="openstack/nova-api-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.202872 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b73ad837-b207-4701-9444-13a324388bcc-config-data\") pod \"nova-api-0\" (UID: \"b73ad837-b207-4701-9444-13a324388bcc\") " pod="openstack/nova-api-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.202912 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b73ad837-b207-4701-9444-13a324388bcc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b73ad837-b207-4701-9444-13a324388bcc\") " pod="openstack/nova-api-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.202992 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbjjk\" (UniqueName: 
\"kubernetes.io/projected/b73ad837-b207-4701-9444-13a324388bcc-kube-api-access-jbjjk\") pod \"nova-api-0\" (UID: \"b73ad837-b207-4701-9444-13a324388bcc\") " pod="openstack/nova-api-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.203425 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.227082 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.240009 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-thknm" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.304848 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b73ad837-b207-4701-9444-13a324388bcc-logs\") pod \"nova-api-0\" (UID: \"b73ad837-b207-4701-9444-13a324388bcc\") " pod="openstack/nova-api-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.305130 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b73ad837-b207-4701-9444-13a324388bcc-config-data\") pod \"nova-api-0\" (UID: \"b73ad837-b207-4701-9444-13a324388bcc\") " pod="openstack/nova-api-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.305235 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b73ad837-b207-4701-9444-13a324388bcc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b73ad837-b207-4701-9444-13a324388bcc\") " pod="openstack/nova-api-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.305365 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d1775b92-4bcb-4709-a299-5b116ed03f37-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1775b92-4bcb-4709-a299-5b116ed03f37\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.305530 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbjjk\" (UniqueName: \"kubernetes.io/projected/b73ad837-b207-4701-9444-13a324388bcc-kube-api-access-jbjjk\") pod \"nova-api-0\" (UID: \"b73ad837-b207-4701-9444-13a324388bcc\") " pod="openstack/nova-api-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.305647 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpvcg\" (UniqueName: \"kubernetes.io/projected/d1775b92-4bcb-4709-a299-5b116ed03f37-kube-api-access-bpvcg\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1775b92-4bcb-4709-a299-5b116ed03f37\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.305775 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1775b92-4bcb-4709-a299-5b116ed03f37-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1775b92-4bcb-4709-a299-5b116ed03f37\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.306357 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b73ad837-b207-4701-9444-13a324388bcc-logs\") pod \"nova-api-0\" (UID: \"b73ad837-b207-4701-9444-13a324388bcc\") " pod="openstack/nova-api-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.312075 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b73ad837-b207-4701-9444-13a324388bcc-config-data\") pod \"nova-api-0\" (UID: 
\"b73ad837-b207-4701-9444-13a324388bcc\") " pod="openstack/nova-api-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.312267 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b73ad837-b207-4701-9444-13a324388bcc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b73ad837-b207-4701-9444-13a324388bcc\") " pod="openstack/nova-api-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.366979 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.368280 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.370806 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.377455 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbjjk\" (UniqueName: \"kubernetes.io/projected/b73ad837-b207-4701-9444-13a324388bcc-kube-api-access-jbjjk\") pod \"nova-api-0\" (UID: \"b73ad837-b207-4701-9444-13a324388bcc\") " pod="openstack/nova-api-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.407167 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfzzq\" (UniqueName: \"kubernetes.io/projected/83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd-kube-api-access-sfzzq\") pod \"nova-scheduler-0\" (UID: \"83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd\") " pod="openstack/nova-scheduler-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.407286 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1775b92-4bcb-4709-a299-5b116ed03f37-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"d1775b92-4bcb-4709-a299-5b116ed03f37\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.407325 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd\") " pod="openstack/nova-scheduler-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.407380 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd-config-data\") pod \"nova-scheduler-0\" (UID: \"83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd\") " pod="openstack/nova-scheduler-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.407414 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpvcg\" (UniqueName: \"kubernetes.io/projected/d1775b92-4bcb-4709-a299-5b116ed03f37-kube-api-access-bpvcg\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1775b92-4bcb-4709-a299-5b116ed03f37\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.407471 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1775b92-4bcb-4709-a299-5b116ed03f37-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1775b92-4bcb-4709-a299-5b116ed03f37\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.410002 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.423573 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d1775b92-4bcb-4709-a299-5b116ed03f37-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1775b92-4bcb-4709-a299-5b116ed03f37\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.438571 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.449207 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.459892 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.463053 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpvcg\" (UniqueName: \"kubernetes.io/projected/d1775b92-4bcb-4709-a299-5b116ed03f37-kube-api-access-bpvcg\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1775b92-4bcb-4709-a299-5b116ed03f37\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.467157 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1775b92-4bcb-4709-a299-5b116ed03f37-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1775b92-4bcb-4709-a299-5b116ed03f37\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.475730 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.508916 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kswhd\" (UniqueName: \"kubernetes.io/projected/c3a6afdb-ff5e-4280-b516-fdb897da268f-kube-api-access-kswhd\") pod \"nova-metadata-0\" (UID: \"c3a6afdb-ff5e-4280-b516-fdb897da268f\") " pod="openstack/nova-metadata-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.509010 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a6afdb-ff5e-4280-b516-fdb897da268f-config-data\") pod \"nova-metadata-0\" (UID: \"c3a6afdb-ff5e-4280-b516-fdb897da268f\") " pod="openstack/nova-metadata-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.509054 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd\") " pod="openstack/nova-scheduler-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.509084 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a6afdb-ff5e-4280-b516-fdb897da268f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c3a6afdb-ff5e-4280-b516-fdb897da268f\") " pod="openstack/nova-metadata-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.509108 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd-config-data\") pod \"nova-scheduler-0\" (UID: \"83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd\") " pod="openstack/nova-scheduler-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.509134 
4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3a6afdb-ff5e-4280-b516-fdb897da268f-logs\") pod \"nova-metadata-0\" (UID: \"c3a6afdb-ff5e-4280-b516-fdb897da268f\") " pod="openstack/nova-metadata-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.509221 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfzzq\" (UniqueName: \"kubernetes.io/projected/83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd-kube-api-access-sfzzq\") pod \"nova-scheduler-0\" (UID: \"83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd\") " pod="openstack/nova-scheduler-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.518033 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.522061 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.534998 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd-config-data\") pod \"nova-scheduler-0\" (UID: \"83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd\") " pod="openstack/nova-scheduler-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.546484 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd\") " pod="openstack/nova-scheduler-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.564622 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfzzq\" (UniqueName: \"kubernetes.io/projected/83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd-kube-api-access-sfzzq\") pod 
\"nova-scheduler-0\" (UID: \"83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd\") " pod="openstack/nova-scheduler-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.612083 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kswhd\" (UniqueName: \"kubernetes.io/projected/c3a6afdb-ff5e-4280-b516-fdb897da268f-kube-api-access-kswhd\") pod \"nova-metadata-0\" (UID: \"c3a6afdb-ff5e-4280-b516-fdb897da268f\") " pod="openstack/nova-metadata-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.621276 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a6afdb-ff5e-4280-b516-fdb897da268f-config-data\") pod \"nova-metadata-0\" (UID: \"c3a6afdb-ff5e-4280-b516-fdb897da268f\") " pod="openstack/nova-metadata-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.621468 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a6afdb-ff5e-4280-b516-fdb897da268f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c3a6afdb-ff5e-4280-b516-fdb897da268f\") " pod="openstack/nova-metadata-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.621533 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3a6afdb-ff5e-4280-b516-fdb897da268f-logs\") pod \"nova-metadata-0\" (UID: \"c3a6afdb-ff5e-4280-b516-fdb897da268f\") " pod="openstack/nova-metadata-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.622165 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3a6afdb-ff5e-4280-b516-fdb897da268f-logs\") pod \"nova-metadata-0\" (UID: \"c3a6afdb-ff5e-4280-b516-fdb897da268f\") " pod="openstack/nova-metadata-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.629653 4637 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a6afdb-ff5e-4280-b516-fdb897da268f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c3a6afdb-ff5e-4280-b516-fdb897da268f\") " pod="openstack/nova-metadata-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.635676 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-gbnmb"] Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.637559 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.639812 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kswhd\" (UniqueName: \"kubernetes.io/projected/c3a6afdb-ff5e-4280-b516-fdb897da268f-kube-api-access-kswhd\") pod \"nova-metadata-0\" (UID: \"c3a6afdb-ff5e-4280-b516-fdb897da268f\") " pod="openstack/nova-metadata-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.643586 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a6afdb-ff5e-4280-b516-fdb897da268f-config-data\") pod \"nova-metadata-0\" (UID: \"c3a6afdb-ff5e-4280-b516-fdb897da268f\") " pod="openstack/nova-metadata-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.710743 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-gbnmb"] Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.812925 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.835051 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-gbnmb\" (UID: \"bca325c0-0fea-403c-8354-654b1c6167a3\") " pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.835151 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-gbnmb\" (UID: \"bca325c0-0fea-403c-8354-654b1c6167a3\") " pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.835194 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-gbnmb\" (UID: \"bca325c0-0fea-403c-8354-654b1c6167a3\") " pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.835274 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-dns-svc\") pod \"dnsmasq-dns-865f5d856f-gbnmb\" (UID: \"bca325c0-0fea-403c-8354-654b1c6167a3\") " pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.835507 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-config\") pod \"dnsmasq-dns-865f5d856f-gbnmb\" (UID: 
\"bca325c0-0fea-403c-8354-654b1c6167a3\") " pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.835585 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxr6r\" (UniqueName: \"kubernetes.io/projected/bca325c0-0fea-403c-8354-654b1c6167a3-kube-api-access-hxr6r\") pod \"dnsmasq-dns-865f5d856f-gbnmb\" (UID: \"bca325c0-0fea-403c-8354-654b1c6167a3\") " pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.837697 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.945676 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxr6r\" (UniqueName: \"kubernetes.io/projected/bca325c0-0fea-403c-8354-654b1c6167a3-kube-api-access-hxr6r\") pod \"dnsmasq-dns-865f5d856f-gbnmb\" (UID: \"bca325c0-0fea-403c-8354-654b1c6167a3\") " pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.945858 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-gbnmb\" (UID: \"bca325c0-0fea-403c-8354-654b1c6167a3\") " pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.945888 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-gbnmb\" (UID: \"bca325c0-0fea-403c-8354-654b1c6167a3\") " pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.945912 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-gbnmb\" (UID: \"bca325c0-0fea-403c-8354-654b1c6167a3\") " pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.945986 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-dns-svc\") pod \"dnsmasq-dns-865f5d856f-gbnmb\" (UID: \"bca325c0-0fea-403c-8354-654b1c6167a3\") " pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.946112 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-config\") pod \"dnsmasq-dns-865f5d856f-gbnmb\" (UID: \"bca325c0-0fea-403c-8354-654b1c6167a3\") " pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.947201 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-config\") pod \"dnsmasq-dns-865f5d856f-gbnmb\" (UID: \"bca325c0-0fea-403c-8354-654b1c6167a3\") " pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.948801 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-dns-svc\") pod \"dnsmasq-dns-865f5d856f-gbnmb\" (UID: \"bca325c0-0fea-403c-8354-654b1c6167a3\") " pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.951387 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-ovsdbserver-nb\") pod 
\"dnsmasq-dns-865f5d856f-gbnmb\" (UID: \"bca325c0-0fea-403c-8354-654b1c6167a3\") " pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.955085 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-gbnmb\" (UID: \"bca325c0-0fea-403c-8354-654b1c6167a3\") " pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.956170 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-gbnmb\" (UID: \"bca325c0-0fea-403c-8354-654b1c6167a3\") " pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.975067 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxr6r\" (UniqueName: \"kubernetes.io/projected/bca325c0-0fea-403c-8354-654b1c6167a3-kube-api-access-hxr6r\") pod \"dnsmasq-dns-865f5d856f-gbnmb\" (UID: \"bca325c0-0fea-403c-8354-654b1c6167a3\") " pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" Dec 01 15:06:16 crc kubenswrapper[4637]: I1201 15:06:16.997042 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.073634 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-thknm"] Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.325897 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.480542 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.531885 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tq4rf"] Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.533357 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tq4rf" Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.538737 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.539016 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.566616 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tq4rf"] Dec 01 15:06:17 crc kubenswrapper[4637]: W1201 15:06:17.645197 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83cf05d0_9d1b_4d7e_b5dd_7f16b82b73fd.slice/crio-2b8a142fb5b71d6daba2262e006befdc92ad0dee88239e541766a11a539b75be WatchSource:0}: Error finding container 2b8a142fb5b71d6daba2262e006befdc92ad0dee88239e541766a11a539b75be: Status 404 returned error can't find the container with id 2b8a142fb5b71d6daba2262e006befdc92ad0dee88239e541766a11a539b75be Dec 01 15:06:17 crc kubenswrapper[4637]: 
I1201 15:06:17.650273 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.670412 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpmrp\" (UniqueName: \"kubernetes.io/projected/71511747-8af4-48e6-8d8e-65c200a23c34-kube-api-access-bpmrp\") pod \"nova-cell1-conductor-db-sync-tq4rf\" (UID: \"71511747-8af4-48e6-8d8e-65c200a23c34\") " pod="openstack/nova-cell1-conductor-db-sync-tq4rf" Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.670556 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71511747-8af4-48e6-8d8e-65c200a23c34-config-data\") pod \"nova-cell1-conductor-db-sync-tq4rf\" (UID: \"71511747-8af4-48e6-8d8e-65c200a23c34\") " pod="openstack/nova-cell1-conductor-db-sync-tq4rf" Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.670577 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71511747-8af4-48e6-8d8e-65c200a23c34-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-tq4rf\" (UID: \"71511747-8af4-48e6-8d8e-65c200a23c34\") " pod="openstack/nova-cell1-conductor-db-sync-tq4rf" Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.670623 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71511747-8af4-48e6-8d8e-65c200a23c34-scripts\") pod \"nova-cell1-conductor-db-sync-tq4rf\" (UID: \"71511747-8af4-48e6-8d8e-65c200a23c34\") " pod="openstack/nova-cell1-conductor-db-sync-tq4rf" Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.675295 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.773652 
4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpmrp\" (UniqueName: \"kubernetes.io/projected/71511747-8af4-48e6-8d8e-65c200a23c34-kube-api-access-bpmrp\") pod \"nova-cell1-conductor-db-sync-tq4rf\" (UID: \"71511747-8af4-48e6-8d8e-65c200a23c34\") " pod="openstack/nova-cell1-conductor-db-sync-tq4rf" Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.774119 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71511747-8af4-48e6-8d8e-65c200a23c34-config-data\") pod \"nova-cell1-conductor-db-sync-tq4rf\" (UID: \"71511747-8af4-48e6-8d8e-65c200a23c34\") " pod="openstack/nova-cell1-conductor-db-sync-tq4rf" Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.774148 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71511747-8af4-48e6-8d8e-65c200a23c34-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-tq4rf\" (UID: \"71511747-8af4-48e6-8d8e-65c200a23c34\") " pod="openstack/nova-cell1-conductor-db-sync-tq4rf" Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.774167 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71511747-8af4-48e6-8d8e-65c200a23c34-scripts\") pod \"nova-cell1-conductor-db-sync-tq4rf\" (UID: \"71511747-8af4-48e6-8d8e-65c200a23c34\") " pod="openstack/nova-cell1-conductor-db-sync-tq4rf" Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.783405 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71511747-8af4-48e6-8d8e-65c200a23c34-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-tq4rf\" (UID: \"71511747-8af4-48e6-8d8e-65c200a23c34\") " pod="openstack/nova-cell1-conductor-db-sync-tq4rf" Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.783535 4637 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71511747-8af4-48e6-8d8e-65c200a23c34-scripts\") pod \"nova-cell1-conductor-db-sync-tq4rf\" (UID: \"71511747-8af4-48e6-8d8e-65c200a23c34\") " pod="openstack/nova-cell1-conductor-db-sync-tq4rf" Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.785599 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71511747-8af4-48e6-8d8e-65c200a23c34-config-data\") pod \"nova-cell1-conductor-db-sync-tq4rf\" (UID: \"71511747-8af4-48e6-8d8e-65c200a23c34\") " pod="openstack/nova-cell1-conductor-db-sync-tq4rf" Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.804969 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpmrp\" (UniqueName: \"kubernetes.io/projected/71511747-8af4-48e6-8d8e-65c200a23c34-kube-api-access-bpmrp\") pod \"nova-cell1-conductor-db-sync-tq4rf\" (UID: \"71511747-8af4-48e6-8d8e-65c200a23c34\") " pod="openstack/nova-cell1-conductor-db-sync-tq4rf" Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.856427 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tq4rf" Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.870282 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-gbnmb"] Dec 01 15:06:17 crc kubenswrapper[4637]: W1201 15:06:17.921216 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbca325c0_0fea_403c_8354_654b1c6167a3.slice/crio-ed406c367aa48ef1479036f4d2b4abdbf152f9130a3da6ad1bcf3999e91b1826 WatchSource:0}: Error finding container ed406c367aa48ef1479036f4d2b4abdbf152f9130a3da6ad1bcf3999e91b1826: Status 404 returned error can't find the container with id ed406c367aa48ef1479036f4d2b4abdbf152f9130a3da6ad1bcf3999e91b1826 Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.926139 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd","Type":"ContainerStarted","Data":"2b8a142fb5b71d6daba2262e006befdc92ad0dee88239e541766a11a539b75be"} Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.929738 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c3a6afdb-ff5e-4280-b516-fdb897da268f","Type":"ContainerStarted","Data":"664bdbd8de5db415278a1f57604218357deade67bf4a4d40963fca27eb0e225b"} Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.932748 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b73ad837-b207-4701-9444-13a324388bcc","Type":"ContainerStarted","Data":"720384660c7e50c208e635873f14cab054e9ad5a8629e0efe70870bd7e19a315"} Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.940252 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d1775b92-4bcb-4709-a299-5b116ed03f37","Type":"ContainerStarted","Data":"621a7f4dc99257674455fc13b3b74a972a44c9a75584498536feee6996ee1323"} Dec 01 
15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.949255 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-thknm" event={"ID":"9abff6ee-1b19-4555-9ba5-2f522f7b3d2c","Type":"ContainerStarted","Data":"5acfaaeeaa2f69aedd23bdfe290b869c6e44324a651880de234633292d0fc90e"} Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.949350 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-thknm" event={"ID":"9abff6ee-1b19-4555-9ba5-2f522f7b3d2c","Type":"ContainerStarted","Data":"002d43219643ca7f72d2f5a16d91b555942824914e04c459cfa8706b7d733f62"} Dec 01 15:06:17 crc kubenswrapper[4637]: I1201 15:06:17.996715 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-thknm" podStartSLOduration=2.996697245 podStartE2EDuration="2.996697245s" podCreationTimestamp="2025-12-01 15:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:06:17.9710296 +0000 UTC m=+1228.488738428" watchObservedRunningTime="2025-12-01 15:06:17.996697245 +0000 UTC m=+1228.514406073" Dec 01 15:06:18 crc kubenswrapper[4637]: I1201 15:06:18.339814 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tq4rf"] Dec 01 15:06:18 crc kubenswrapper[4637]: I1201 15:06:18.970373 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tq4rf" event={"ID":"71511747-8af4-48e6-8d8e-65c200a23c34","Type":"ContainerStarted","Data":"1a169fbfbacbed0622d8ee77116e7c0c1c2b11f112cfa139eb69cd39052ce9f0"} Dec 01 15:06:18 crc kubenswrapper[4637]: I1201 15:06:18.970804 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tq4rf" 
event={"ID":"71511747-8af4-48e6-8d8e-65c200a23c34","Type":"ContainerStarted","Data":"b53f5ee442b060f958341cbbf987b5b107b98fee66f5f6663177ce88bfbfb16d"} Dec 01 15:06:18 crc kubenswrapper[4637]: I1201 15:06:18.979735 4637 generic.go:334] "Generic (PLEG): container finished" podID="bca325c0-0fea-403c-8354-654b1c6167a3" containerID="255d24f251f2253fd42dcc3efa0c8a0922af793f95e693ce4c9c89061f18ce70" exitCode=0 Dec 01 15:06:18 crc kubenswrapper[4637]: I1201 15:06:18.980900 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" event={"ID":"bca325c0-0fea-403c-8354-654b1c6167a3","Type":"ContainerDied","Data":"255d24f251f2253fd42dcc3efa0c8a0922af793f95e693ce4c9c89061f18ce70"} Dec 01 15:06:18 crc kubenswrapper[4637]: I1201 15:06:18.980941 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" event={"ID":"bca325c0-0fea-403c-8354-654b1c6167a3","Type":"ContainerStarted","Data":"ed406c367aa48ef1479036f4d2b4abdbf152f9130a3da6ad1bcf3999e91b1826"} Dec 01 15:06:18 crc kubenswrapper[4637]: I1201 15:06:18.999156 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-tq4rf" podStartSLOduration=1.999120112 podStartE2EDuration="1.999120112s" podCreationTimestamp="2025-12-01 15:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:06:18.996172113 +0000 UTC m=+1229.513880941" watchObservedRunningTime="2025-12-01 15:06:18.999120112 +0000 UTC m=+1229.516828930" Dec 01 15:06:20 crc kubenswrapper[4637]: I1201 15:06:20.025445 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" event={"ID":"bca325c0-0fea-403c-8354-654b1c6167a3","Type":"ContainerStarted","Data":"db919e85951f0b5c744e820b05d43e85696840ac52d8cbfbeea5685c602a4d3d"} Dec 01 15:06:20 crc kubenswrapper[4637]: I1201 15:06:20.029609 4637 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" Dec 01 15:06:20 crc kubenswrapper[4637]: I1201 15:06:20.074623 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" podStartSLOduration=4.074600667 podStartE2EDuration="4.074600667s" podCreationTimestamp="2025-12-01 15:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:06:20.066176889 +0000 UTC m=+1230.583885717" watchObservedRunningTime="2025-12-01 15:06:20.074600667 +0000 UTC m=+1230.592309495" Dec 01 15:06:20 crc kubenswrapper[4637]: I1201 15:06:20.355137 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 15:06:20 crc kubenswrapper[4637]: I1201 15:06:20.378208 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 15:06:22 crc kubenswrapper[4637]: I1201 15:06:22.067441 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c3a6afdb-ff5e-4280-b516-fdb897da268f","Type":"ContainerStarted","Data":"cb655c74ff501d29e7160a5e02cbd9f14abccb75a96496b717fd4e5f1fc2004e"} Dec 01 15:06:22 crc kubenswrapper[4637]: I1201 15:06:22.068176 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c3a6afdb-ff5e-4280-b516-fdb897da268f","Type":"ContainerStarted","Data":"78079b022401b03e285b0f0813807f55476f29097b8cd4c98c9e8d97cab0d1a9"} Dec 01 15:06:22 crc kubenswrapper[4637]: I1201 15:06:22.068207 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c3a6afdb-ff5e-4280-b516-fdb897da268f" containerName="nova-metadata-log" containerID="cri-o://78079b022401b03e285b0f0813807f55476f29097b8cd4c98c9e8d97cab0d1a9" gracePeriod=30 Dec 01 15:06:22 crc kubenswrapper[4637]: I1201 
15:06:22.068349 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c3a6afdb-ff5e-4280-b516-fdb897da268f" containerName="nova-metadata-metadata" containerID="cri-o://cb655c74ff501d29e7160a5e02cbd9f14abccb75a96496b717fd4e5f1fc2004e" gracePeriod=30 Dec 01 15:06:22 crc kubenswrapper[4637]: I1201 15:06:22.072273 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b73ad837-b207-4701-9444-13a324388bcc","Type":"ContainerStarted","Data":"08f90a21413bb38296f6c0b0ebdd518b59edf43d85273fdaa8f8489202b605d3"} Dec 01 15:06:22 crc kubenswrapper[4637]: I1201 15:06:22.077163 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd","Type":"ContainerStarted","Data":"65d4c3a12bb741a5379e1b2c4363d1db6c3baa0bee92acb5a77d894761aa9ae8"} Dec 01 15:06:22 crc kubenswrapper[4637]: I1201 15:06:22.112324 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.361196928 podStartE2EDuration="6.112305062s" podCreationTimestamp="2025-12-01 15:06:16 +0000 UTC" firstStartedPulling="2025-12-01 15:06:17.682274906 +0000 UTC m=+1228.199983724" lastFinishedPulling="2025-12-01 15:06:21.43338303 +0000 UTC m=+1231.951091858" observedRunningTime="2025-12-01 15:06:22.088371254 +0000 UTC m=+1232.606080082" watchObservedRunningTime="2025-12-01 15:06:22.112305062 +0000 UTC m=+1232.630013890" Dec 01 15:06:22 crc kubenswrapper[4637]: I1201 15:06:22.127444 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.350849268 podStartE2EDuration="6.127430161s" podCreationTimestamp="2025-12-01 15:06:16 +0000 UTC" firstStartedPulling="2025-12-01 15:06:17.653154208 +0000 UTC m=+1228.170863036" lastFinishedPulling="2025-12-01 15:06:21.429735101 +0000 UTC m=+1231.947443929" 
observedRunningTime="2025-12-01 15:06:22.122256172 +0000 UTC m=+1232.639964990" watchObservedRunningTime="2025-12-01 15:06:22.127430161 +0000 UTC m=+1232.645138979" Dec 01 15:06:23 crc kubenswrapper[4637]: I1201 15:06:23.093338 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b73ad837-b207-4701-9444-13a324388bcc","Type":"ContainerStarted","Data":"485ad343a6bde29aaba85f0c777b7285adf149e912c8da7c19efbed2a385abf1"} Dec 01 15:06:23 crc kubenswrapper[4637]: I1201 15:06:23.096435 4637 generic.go:334] "Generic (PLEG): container finished" podID="c3a6afdb-ff5e-4280-b516-fdb897da268f" containerID="78079b022401b03e285b0f0813807f55476f29097b8cd4c98c9e8d97cab0d1a9" exitCode=143 Dec 01 15:06:23 crc kubenswrapper[4637]: I1201 15:06:23.096858 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c3a6afdb-ff5e-4280-b516-fdb897da268f","Type":"ContainerDied","Data":"78079b022401b03e285b0f0813807f55476f29097b8cd4c98c9e8d97cab0d1a9"} Dec 01 15:06:23 crc kubenswrapper[4637]: I1201 15:06:23.122243 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.065288953 podStartE2EDuration="7.122226423s" podCreationTimestamp="2025-12-01 15:06:16 +0000 UTC" firstStartedPulling="2025-12-01 15:06:17.3742507 +0000 UTC m=+1227.891959528" lastFinishedPulling="2025-12-01 15:06:21.43118817 +0000 UTC m=+1231.948896998" observedRunningTime="2025-12-01 15:06:23.117290829 +0000 UTC m=+1233.634999657" watchObservedRunningTime="2025-12-01 15:06:23.122226423 +0000 UTC m=+1233.639935251" Dec 01 15:06:24 crc kubenswrapper[4637]: I1201 15:06:24.110215 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="d1775b92-4bcb-4709-a299-5b116ed03f37" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://775bf6508043d7a7e09c9c877927ad79135730eb5ed2584635ccba435544bf6e" gracePeriod=30 
Dec 01 15:06:24 crc kubenswrapper[4637]: I1201 15:06:24.110718 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d1775b92-4bcb-4709-a299-5b116ed03f37","Type":"ContainerStarted","Data":"775bf6508043d7a7e09c9c877927ad79135730eb5ed2584635ccba435544bf6e"} Dec 01 15:06:26 crc kubenswrapper[4637]: I1201 15:06:26.477173 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 01 15:06:26 crc kubenswrapper[4637]: I1201 15:06:26.477792 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 01 15:06:26 crc kubenswrapper[4637]: I1201 15:06:26.519793 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:26 crc kubenswrapper[4637]: I1201 15:06:26.813655 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 01 15:06:26 crc kubenswrapper[4637]: I1201 15:06:26.814242 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 01 15:06:26 crc kubenswrapper[4637]: I1201 15:06:26.838313 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 01 15:06:26 crc kubenswrapper[4637]: I1201 15:06:26.838368 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 01 15:06:26 crc kubenswrapper[4637]: I1201 15:06:26.875135 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 01 15:06:26 crc kubenswrapper[4637]: I1201 15:06:26.907168 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=5.465074997 podStartE2EDuration="10.907149332s" podCreationTimestamp="2025-12-01 15:06:16 +0000 UTC" firstStartedPulling="2025-12-01 15:06:17.50397262 
+0000 UTC m=+1228.021681448" lastFinishedPulling="2025-12-01 15:06:22.946046955 +0000 UTC m=+1233.463755783" observedRunningTime="2025-12-01 15:06:24.140305155 +0000 UTC m=+1234.658014013" watchObservedRunningTime="2025-12-01 15:06:26.907149332 +0000 UTC m=+1237.424858160" Dec 01 15:06:27 crc kubenswrapper[4637]: I1201 15:06:27.009301 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" Dec 01 15:06:27 crc kubenswrapper[4637]: I1201 15:06:27.105585 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-589zr"] Dec 01 15:06:27 crc kubenswrapper[4637]: I1201 15:06:27.107834 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" podUID="9b600fb5-a1de-4b48-93bb-bca6eaebca44" containerName="dnsmasq-dns" containerID="cri-o://5c82d0768ae56e4e0f5d0dcd53e25e37d02bf3eeb64d3d3778b2d9b5125e155c" gracePeriod=10 Dec 01 15:06:27 crc kubenswrapper[4637]: I1201 15:06:27.199850 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 01 15:06:27 crc kubenswrapper[4637]: I1201 15:06:27.564119 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b73ad837-b207-4701-9444-13a324388bcc" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 15:06:27 crc kubenswrapper[4637]: I1201 15:06:27.565050 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b73ad837-b207-4701-9444-13a324388bcc" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 15:06:27 crc kubenswrapper[4637]: I1201 15:06:27.893072 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" Dec 01 15:06:27 crc kubenswrapper[4637]: I1201 15:06:27.959915 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwdk4\" (UniqueName: \"kubernetes.io/projected/9b600fb5-a1de-4b48-93bb-bca6eaebca44-kube-api-access-fwdk4\") pod \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\" (UID: \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\") " Dec 01 15:06:27 crc kubenswrapper[4637]: I1201 15:06:27.960196 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-ovsdbserver-sb\") pod \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\" (UID: \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\") " Dec 01 15:06:27 crc kubenswrapper[4637]: I1201 15:06:27.960257 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-dns-swift-storage-0\") pod \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\" (UID: \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\") " Dec 01 15:06:27 crc kubenswrapper[4637]: I1201 15:06:27.960297 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-dns-svc\") pod \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\" (UID: \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\") " Dec 01 15:06:27 crc kubenswrapper[4637]: I1201 15:06:27.960337 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-ovsdbserver-nb\") pod \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\" (UID: \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\") " Dec 01 15:06:27 crc kubenswrapper[4637]: I1201 15:06:27.960379 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-config\") pod \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\" (UID: \"9b600fb5-a1de-4b48-93bb-bca6eaebca44\") " Dec 01 15:06:27 crc kubenswrapper[4637]: I1201 15:06:27.982611 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b600fb5-a1de-4b48-93bb-bca6eaebca44-kube-api-access-fwdk4" (OuterVolumeSpecName: "kube-api-access-fwdk4") pod "9b600fb5-a1de-4b48-93bb-bca6eaebca44" (UID: "9b600fb5-a1de-4b48-93bb-bca6eaebca44"). InnerVolumeSpecName "kube-api-access-fwdk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:06:28 crc kubenswrapper[4637]: I1201 15:06:28.068108 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9b600fb5-a1de-4b48-93bb-bca6eaebca44" (UID: "9b600fb5-a1de-4b48-93bb-bca6eaebca44"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:06:28 crc kubenswrapper[4637]: I1201 15:06:28.080712 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwdk4\" (UniqueName: \"kubernetes.io/projected/9b600fb5-a1de-4b48-93bb-bca6eaebca44-kube-api-access-fwdk4\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:28 crc kubenswrapper[4637]: I1201 15:06:28.080753 4637 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:28 crc kubenswrapper[4637]: I1201 15:06:28.090487 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-config" (OuterVolumeSpecName: "config") pod "9b600fb5-a1de-4b48-93bb-bca6eaebca44" (UID: "9b600fb5-a1de-4b48-93bb-bca6eaebca44"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:06:28 crc kubenswrapper[4637]: I1201 15:06:28.101004 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9b600fb5-a1de-4b48-93bb-bca6eaebca44" (UID: "9b600fb5-a1de-4b48-93bb-bca6eaebca44"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:06:28 crc kubenswrapper[4637]: I1201 15:06:28.123613 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9b600fb5-a1de-4b48-93bb-bca6eaebca44" (UID: "9b600fb5-a1de-4b48-93bb-bca6eaebca44"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:06:28 crc kubenswrapper[4637]: I1201 15:06:28.126499 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9b600fb5-a1de-4b48-93bb-bca6eaebca44" (UID: "9b600fb5-a1de-4b48-93bb-bca6eaebca44"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:06:28 crc kubenswrapper[4637]: I1201 15:06:28.155191 4637 generic.go:334] "Generic (PLEG): container finished" podID="9abff6ee-1b19-4555-9ba5-2f522f7b3d2c" containerID="5acfaaeeaa2f69aedd23bdfe290b869c6e44324a651880de234633292d0fc90e" exitCode=0 Dec 01 15:06:28 crc kubenswrapper[4637]: I1201 15:06:28.155295 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-thknm" event={"ID":"9abff6ee-1b19-4555-9ba5-2f522f7b3d2c","Type":"ContainerDied","Data":"5acfaaeeaa2f69aedd23bdfe290b869c6e44324a651880de234633292d0fc90e"} Dec 01 15:06:28 crc kubenswrapper[4637]: I1201 15:06:28.158424 4637 generic.go:334] "Generic (PLEG): container finished" podID="9b600fb5-a1de-4b48-93bb-bca6eaebca44" containerID="5c82d0768ae56e4e0f5d0dcd53e25e37d02bf3eeb64d3d3778b2d9b5125e155c" exitCode=0 Dec 01 15:06:28 crc kubenswrapper[4637]: I1201 15:06:28.158482 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" event={"ID":"9b600fb5-a1de-4b48-93bb-bca6eaebca44","Type":"ContainerDied","Data":"5c82d0768ae56e4e0f5d0dcd53e25e37d02bf3eeb64d3d3778b2d9b5125e155c"} Dec 01 15:06:28 crc kubenswrapper[4637]: I1201 15:06:28.158507 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" event={"ID":"9b600fb5-a1de-4b48-93bb-bca6eaebca44","Type":"ContainerDied","Data":"0505c08b1cabfb53710f20f1156f9fa05844f597fedc236594af056c769143c6"} Dec 01 15:06:28 crc kubenswrapper[4637]: I1201 
15:06:28.158527 4637 scope.go:117] "RemoveContainer" containerID="5c82d0768ae56e4e0f5d0dcd53e25e37d02bf3eeb64d3d3778b2d9b5125e155c" Dec 01 15:06:28 crc kubenswrapper[4637]: I1201 15:06:28.158647 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-589zr" Dec 01 15:06:28 crc kubenswrapper[4637]: I1201 15:06:28.163713 4637 generic.go:334] "Generic (PLEG): container finished" podID="71511747-8af4-48e6-8d8e-65c200a23c34" containerID="1a169fbfbacbed0622d8ee77116e7c0c1c2b11f112cfa139eb69cd39052ce9f0" exitCode=0 Dec 01 15:06:28 crc kubenswrapper[4637]: I1201 15:06:28.163883 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tq4rf" event={"ID":"71511747-8af4-48e6-8d8e-65c200a23c34","Type":"ContainerDied","Data":"1a169fbfbacbed0622d8ee77116e7c0c1c2b11f112cfa139eb69cd39052ce9f0"} Dec 01 15:06:28 crc kubenswrapper[4637]: I1201 15:06:28.182823 4637 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:28 crc kubenswrapper[4637]: I1201 15:06:28.182863 4637 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:28 crc kubenswrapper[4637]: I1201 15:06:28.182873 4637 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:28 crc kubenswrapper[4637]: I1201 15:06:28.182883 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b600fb5-a1de-4b48-93bb-bca6eaebca44-config\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:28 crc kubenswrapper[4637]: I1201 
15:06:28.195961 4637 scope.go:117] "RemoveContainer" containerID="6f377f9ced5743cba09fe236f09f4deeace29cbb988184d7e314ed7e4cf595cd" Dec 01 15:06:28 crc kubenswrapper[4637]: I1201 15:06:28.231207 4637 scope.go:117] "RemoveContainer" containerID="5c82d0768ae56e4e0f5d0dcd53e25e37d02bf3eeb64d3d3778b2d9b5125e155c" Dec 01 15:06:28 crc kubenswrapper[4637]: E1201 15:06:28.232098 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c82d0768ae56e4e0f5d0dcd53e25e37d02bf3eeb64d3d3778b2d9b5125e155c\": container with ID starting with 5c82d0768ae56e4e0f5d0dcd53e25e37d02bf3eeb64d3d3778b2d9b5125e155c not found: ID does not exist" containerID="5c82d0768ae56e4e0f5d0dcd53e25e37d02bf3eeb64d3d3778b2d9b5125e155c" Dec 01 15:06:28 crc kubenswrapper[4637]: I1201 15:06:28.232147 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c82d0768ae56e4e0f5d0dcd53e25e37d02bf3eeb64d3d3778b2d9b5125e155c"} err="failed to get container status \"5c82d0768ae56e4e0f5d0dcd53e25e37d02bf3eeb64d3d3778b2d9b5125e155c\": rpc error: code = NotFound desc = could not find container \"5c82d0768ae56e4e0f5d0dcd53e25e37d02bf3eeb64d3d3778b2d9b5125e155c\": container with ID starting with 5c82d0768ae56e4e0f5d0dcd53e25e37d02bf3eeb64d3d3778b2d9b5125e155c not found: ID does not exist" Dec 01 15:06:28 crc kubenswrapper[4637]: I1201 15:06:28.232175 4637 scope.go:117] "RemoveContainer" containerID="6f377f9ced5743cba09fe236f09f4deeace29cbb988184d7e314ed7e4cf595cd" Dec 01 15:06:28 crc kubenswrapper[4637]: E1201 15:06:28.232486 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f377f9ced5743cba09fe236f09f4deeace29cbb988184d7e314ed7e4cf595cd\": container with ID starting with 6f377f9ced5743cba09fe236f09f4deeace29cbb988184d7e314ed7e4cf595cd not found: ID does not exist" 
containerID="6f377f9ced5743cba09fe236f09f4deeace29cbb988184d7e314ed7e4cf595cd" Dec 01 15:06:28 crc kubenswrapper[4637]: I1201 15:06:28.232515 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f377f9ced5743cba09fe236f09f4deeace29cbb988184d7e314ed7e4cf595cd"} err="failed to get container status \"6f377f9ced5743cba09fe236f09f4deeace29cbb988184d7e314ed7e4cf595cd\": rpc error: code = NotFound desc = could not find container \"6f377f9ced5743cba09fe236f09f4deeace29cbb988184d7e314ed7e4cf595cd\": container with ID starting with 6f377f9ced5743cba09fe236f09f4deeace29cbb988184d7e314ed7e4cf595cd not found: ID does not exist" Dec 01 15:06:28 crc kubenswrapper[4637]: I1201 15:06:28.240855 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-589zr"] Dec 01 15:06:28 crc kubenswrapper[4637]: I1201 15:06:28.251983 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-589zr"] Dec 01 15:06:29 crc kubenswrapper[4637]: I1201 15:06:29.623188 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tq4rf" Dec 01 15:06:29 crc kubenswrapper[4637]: I1201 15:06:29.635053 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-thknm" Dec 01 15:06:29 crc kubenswrapper[4637]: I1201 15:06:29.717802 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpmrp\" (UniqueName: \"kubernetes.io/projected/71511747-8af4-48e6-8d8e-65c200a23c34-kube-api-access-bpmrp\") pod \"71511747-8af4-48e6-8d8e-65c200a23c34\" (UID: \"71511747-8af4-48e6-8d8e-65c200a23c34\") " Dec 01 15:06:29 crc kubenswrapper[4637]: I1201 15:06:29.718388 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71511747-8af4-48e6-8d8e-65c200a23c34-combined-ca-bundle\") pod \"71511747-8af4-48e6-8d8e-65c200a23c34\" (UID: \"71511747-8af4-48e6-8d8e-65c200a23c34\") " Dec 01 15:06:29 crc kubenswrapper[4637]: I1201 15:06:29.718542 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71511747-8af4-48e6-8d8e-65c200a23c34-scripts\") pod \"71511747-8af4-48e6-8d8e-65c200a23c34\" (UID: \"71511747-8af4-48e6-8d8e-65c200a23c34\") " Dec 01 15:06:29 crc kubenswrapper[4637]: I1201 15:06:29.718632 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9abff6ee-1b19-4555-9ba5-2f522f7b3d2c-scripts\") pod \"9abff6ee-1b19-4555-9ba5-2f522f7b3d2c\" (UID: \"9abff6ee-1b19-4555-9ba5-2f522f7b3d2c\") " Dec 01 15:06:29 crc kubenswrapper[4637]: I1201 15:06:29.718670 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z94nf\" (UniqueName: \"kubernetes.io/projected/9abff6ee-1b19-4555-9ba5-2f522f7b3d2c-kube-api-access-z94nf\") pod \"9abff6ee-1b19-4555-9ba5-2f522f7b3d2c\" (UID: \"9abff6ee-1b19-4555-9ba5-2f522f7b3d2c\") " Dec 01 15:06:29 crc kubenswrapper[4637]: I1201 15:06:29.719885 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/71511747-8af4-48e6-8d8e-65c200a23c34-config-data\") pod \"71511747-8af4-48e6-8d8e-65c200a23c34\" (UID: \"71511747-8af4-48e6-8d8e-65c200a23c34\") " Dec 01 15:06:29 crc kubenswrapper[4637]: I1201 15:06:29.719974 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9abff6ee-1b19-4555-9ba5-2f522f7b3d2c-combined-ca-bundle\") pod \"9abff6ee-1b19-4555-9ba5-2f522f7b3d2c\" (UID: \"9abff6ee-1b19-4555-9ba5-2f522f7b3d2c\") " Dec 01 15:06:29 crc kubenswrapper[4637]: I1201 15:06:29.720042 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9abff6ee-1b19-4555-9ba5-2f522f7b3d2c-config-data\") pod \"9abff6ee-1b19-4555-9ba5-2f522f7b3d2c\" (UID: \"9abff6ee-1b19-4555-9ba5-2f522f7b3d2c\") " Dec 01 15:06:29 crc kubenswrapper[4637]: I1201 15:06:29.726888 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9abff6ee-1b19-4555-9ba5-2f522f7b3d2c-scripts" (OuterVolumeSpecName: "scripts") pod "9abff6ee-1b19-4555-9ba5-2f522f7b3d2c" (UID: "9abff6ee-1b19-4555-9ba5-2f522f7b3d2c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:06:29 crc kubenswrapper[4637]: I1201 15:06:29.733218 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71511747-8af4-48e6-8d8e-65c200a23c34-scripts" (OuterVolumeSpecName: "scripts") pod "71511747-8af4-48e6-8d8e-65c200a23c34" (UID: "71511747-8af4-48e6-8d8e-65c200a23c34"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:06:29 crc kubenswrapper[4637]: I1201 15:06:29.733630 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71511747-8af4-48e6-8d8e-65c200a23c34-kube-api-access-bpmrp" (OuterVolumeSpecName: "kube-api-access-bpmrp") pod "71511747-8af4-48e6-8d8e-65c200a23c34" (UID: "71511747-8af4-48e6-8d8e-65c200a23c34"). InnerVolumeSpecName "kube-api-access-bpmrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:06:29 crc kubenswrapper[4637]: I1201 15:06:29.733803 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9abff6ee-1b19-4555-9ba5-2f522f7b3d2c-kube-api-access-z94nf" (OuterVolumeSpecName: "kube-api-access-z94nf") pod "9abff6ee-1b19-4555-9ba5-2f522f7b3d2c" (UID: "9abff6ee-1b19-4555-9ba5-2f522f7b3d2c"). InnerVolumeSpecName "kube-api-access-z94nf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:06:29 crc kubenswrapper[4637]: I1201 15:06:29.756221 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71511747-8af4-48e6-8d8e-65c200a23c34-config-data" (OuterVolumeSpecName: "config-data") pod "71511747-8af4-48e6-8d8e-65c200a23c34" (UID: "71511747-8af4-48e6-8d8e-65c200a23c34"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:06:29 crc kubenswrapper[4637]: I1201 15:06:29.762077 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71511747-8af4-48e6-8d8e-65c200a23c34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71511747-8af4-48e6-8d8e-65c200a23c34" (UID: "71511747-8af4-48e6-8d8e-65c200a23c34"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:06:29 crc kubenswrapper[4637]: I1201 15:06:29.786855 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b600fb5-a1de-4b48-93bb-bca6eaebca44" path="/var/lib/kubelet/pods/9b600fb5-a1de-4b48-93bb-bca6eaebca44/volumes" Dec 01 15:06:29 crc kubenswrapper[4637]: I1201 15:06:29.791702 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9abff6ee-1b19-4555-9ba5-2f522f7b3d2c-config-data" (OuterVolumeSpecName: "config-data") pod "9abff6ee-1b19-4555-9ba5-2f522f7b3d2c" (UID: "9abff6ee-1b19-4555-9ba5-2f522f7b3d2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:06:29 crc kubenswrapper[4637]: I1201 15:06:29.796776 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9abff6ee-1b19-4555-9ba5-2f522f7b3d2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9abff6ee-1b19-4555-9ba5-2f522f7b3d2c" (UID: "9abff6ee-1b19-4555-9ba5-2f522f7b3d2c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:06:29 crc kubenswrapper[4637]: I1201 15:06:29.823593 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z94nf\" (UniqueName: \"kubernetes.io/projected/9abff6ee-1b19-4555-9ba5-2f522f7b3d2c-kube-api-access-z94nf\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:29 crc kubenswrapper[4637]: I1201 15:06:29.823641 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71511747-8af4-48e6-8d8e-65c200a23c34-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:29 crc kubenswrapper[4637]: I1201 15:06:29.823653 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9abff6ee-1b19-4555-9ba5-2f522f7b3d2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:29 crc kubenswrapper[4637]: I1201 15:06:29.823663 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9abff6ee-1b19-4555-9ba5-2f522f7b3d2c-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:29 crc kubenswrapper[4637]: I1201 15:06:29.823673 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpmrp\" (UniqueName: \"kubernetes.io/projected/71511747-8af4-48e6-8d8e-65c200a23c34-kube-api-access-bpmrp\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:29 crc kubenswrapper[4637]: I1201 15:06:29.823683 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71511747-8af4-48e6-8d8e-65c200a23c34-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:29 crc kubenswrapper[4637]: I1201 15:06:29.823691 4637 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71511747-8af4-48e6-8d8e-65c200a23c34-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:29 crc kubenswrapper[4637]: I1201 15:06:29.823701 4637 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9abff6ee-1b19-4555-9ba5-2f522f7b3d2c-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.193450 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tq4rf" event={"ID":"71511747-8af4-48e6-8d8e-65c200a23c34","Type":"ContainerDied","Data":"b53f5ee442b060f958341cbbf987b5b107b98fee66f5f6663177ce88bfbfb16d"} Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.193510 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b53f5ee442b060f958341cbbf987b5b107b98fee66f5f6663177ce88bfbfb16d" Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.194083 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tq4rf" Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.196684 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-thknm" event={"ID":"9abff6ee-1b19-4555-9ba5-2f522f7b3d2c","Type":"ContainerDied","Data":"002d43219643ca7f72d2f5a16d91b555942824914e04c459cfa8706b7d733f62"} Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.196729 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="002d43219643ca7f72d2f5a16d91b555942824914e04c459cfa8706b7d733f62" Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.196839 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-thknm" Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.322333 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 01 15:06:30 crc kubenswrapper[4637]: E1201 15:06:30.324225 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b600fb5-a1de-4b48-93bb-bca6eaebca44" containerName="init" Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.324253 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b600fb5-a1de-4b48-93bb-bca6eaebca44" containerName="init" Dec 01 15:06:30 crc kubenswrapper[4637]: E1201 15:06:30.324299 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71511747-8af4-48e6-8d8e-65c200a23c34" containerName="nova-cell1-conductor-db-sync" Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.324309 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="71511747-8af4-48e6-8d8e-65c200a23c34" containerName="nova-cell1-conductor-db-sync" Dec 01 15:06:30 crc kubenswrapper[4637]: E1201 15:06:30.324393 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9abff6ee-1b19-4555-9ba5-2f522f7b3d2c" containerName="nova-manage" Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.324407 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="9abff6ee-1b19-4555-9ba5-2f522f7b3d2c" containerName="nova-manage" Dec 01 15:06:30 crc kubenswrapper[4637]: E1201 15:06:30.324430 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b600fb5-a1de-4b48-93bb-bca6eaebca44" containerName="dnsmasq-dns" Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.324438 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b600fb5-a1de-4b48-93bb-bca6eaebca44" containerName="dnsmasq-dns" Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.324906 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="9abff6ee-1b19-4555-9ba5-2f522f7b3d2c" 
containerName="nova-manage" Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.324961 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b600fb5-a1de-4b48-93bb-bca6eaebca44" containerName="dnsmasq-dns" Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.324989 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="71511747-8af4-48e6-8d8e-65c200a23c34" containerName="nova-cell1-conductor-db-sync" Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.326957 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.333800 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.348320 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.446660 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47cc23aa-328f-4714-8407-cb7e62fa05db-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"47cc23aa-328f-4714-8407-cb7e62fa05db\") " pod="openstack/nova-cell1-conductor-0" Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.446729 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6ld5\" (UniqueName: \"kubernetes.io/projected/47cc23aa-328f-4714-8407-cb7e62fa05db-kube-api-access-p6ld5\") pod \"nova-cell1-conductor-0\" (UID: \"47cc23aa-328f-4714-8407-cb7e62fa05db\") " pod="openstack/nova-cell1-conductor-0" Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.446753 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/47cc23aa-328f-4714-8407-cb7e62fa05db-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"47cc23aa-328f-4714-8407-cb7e62fa05db\") " pod="openstack/nova-cell1-conductor-0" Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.469673 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.469971 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b73ad837-b207-4701-9444-13a324388bcc" containerName="nova-api-log" containerID="cri-o://08f90a21413bb38296f6c0b0ebdd518b59edf43d85273fdaa8f8489202b605d3" gracePeriod=30 Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.470481 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b73ad837-b207-4701-9444-13a324388bcc" containerName="nova-api-api" containerID="cri-o://485ad343a6bde29aaba85f0c777b7285adf149e912c8da7c19efbed2a385abf1" gracePeriod=30 Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.499441 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.499678 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd" containerName="nova-scheduler-scheduler" containerID="cri-o://65d4c3a12bb741a5379e1b2c4363d1db6c3baa0bee92acb5a77d894761aa9ae8" gracePeriod=30 Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.549285 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47cc23aa-328f-4714-8407-cb7e62fa05db-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"47cc23aa-328f-4714-8407-cb7e62fa05db\") " pod="openstack/nova-cell1-conductor-0" Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.549351 4637 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6ld5\" (UniqueName: \"kubernetes.io/projected/47cc23aa-328f-4714-8407-cb7e62fa05db-kube-api-access-p6ld5\") pod \"nova-cell1-conductor-0\" (UID: \"47cc23aa-328f-4714-8407-cb7e62fa05db\") " pod="openstack/nova-cell1-conductor-0" Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.549373 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47cc23aa-328f-4714-8407-cb7e62fa05db-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"47cc23aa-328f-4714-8407-cb7e62fa05db\") " pod="openstack/nova-cell1-conductor-0" Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.555661 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47cc23aa-328f-4714-8407-cb7e62fa05db-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"47cc23aa-328f-4714-8407-cb7e62fa05db\") " pod="openstack/nova-cell1-conductor-0" Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.559683 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47cc23aa-328f-4714-8407-cb7e62fa05db-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"47cc23aa-328f-4714-8407-cb7e62fa05db\") " pod="openstack/nova-cell1-conductor-0" Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.571726 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6ld5\" (UniqueName: \"kubernetes.io/projected/47cc23aa-328f-4714-8407-cb7e62fa05db-kube-api-access-p6ld5\") pod \"nova-cell1-conductor-0\" (UID: \"47cc23aa-328f-4714-8407-cb7e62fa05db\") " pod="openstack/nova-cell1-conductor-0" Dec 01 15:06:30 crc kubenswrapper[4637]: I1201 15:06:30.647183 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 01 15:06:31 crc kubenswrapper[4637]: I1201 15:06:31.131949 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 01 15:06:31 crc kubenswrapper[4637]: I1201 15:06:31.222709 4637 generic.go:334] "Generic (PLEG): container finished" podID="b73ad837-b207-4701-9444-13a324388bcc" containerID="08f90a21413bb38296f6c0b0ebdd518b59edf43d85273fdaa8f8489202b605d3" exitCode=143 Dec 01 15:06:31 crc kubenswrapper[4637]: I1201 15:06:31.222796 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b73ad837-b207-4701-9444-13a324388bcc","Type":"ContainerDied","Data":"08f90a21413bb38296f6c0b0ebdd518b59edf43d85273fdaa8f8489202b605d3"} Dec 01 15:06:31 crc kubenswrapper[4637]: I1201 15:06:31.224197 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"47cc23aa-328f-4714-8407-cb7e62fa05db","Type":"ContainerStarted","Data":"813dbb991eacdcdad4405c42912b893421d10325a151080b6598dabeec6b0bc9"} Dec 01 15:06:31 crc kubenswrapper[4637]: E1201 15:06:31.815863 4637 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="65d4c3a12bb741a5379e1b2c4363d1db6c3baa0bee92acb5a77d894761aa9ae8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 01 15:06:31 crc kubenswrapper[4637]: E1201 15:06:31.817518 4637 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="65d4c3a12bb741a5379e1b2c4363d1db6c3baa0bee92acb5a77d894761aa9ae8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 01 15:06:31 crc kubenswrapper[4637]: E1201 15:06:31.818978 4637 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="65d4c3a12bb741a5379e1b2c4363d1db6c3baa0bee92acb5a77d894761aa9ae8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 01 15:06:31 crc kubenswrapper[4637]: E1201 15:06:31.819063 4637 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd" containerName="nova-scheduler-scheduler" Dec 01 15:06:32 crc kubenswrapper[4637]: I1201 15:06:32.248476 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"47cc23aa-328f-4714-8407-cb7e62fa05db","Type":"ContainerStarted","Data":"c2442ec626bd9521e864d760b70d7296249f7cefd4ced9993f7db0f65738eb99"} Dec 01 15:06:32 crc kubenswrapper[4637]: I1201 15:06:32.248921 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 01 15:06:32 crc kubenswrapper[4637]: I1201 15:06:32.271961 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.271943786 podStartE2EDuration="2.271943786s" podCreationTimestamp="2025-12-01 15:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:06:32.264393451 +0000 UTC m=+1242.782102279" watchObservedRunningTime="2025-12-01 15:06:32.271943786 +0000 UTC m=+1242.789652614" Dec 01 15:06:33 crc kubenswrapper[4637]: I1201 15:06:33.773452 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 15:06:33 crc kubenswrapper[4637]: I1201 15:06:33.809978 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfzzq\" (UniqueName: \"kubernetes.io/projected/83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd-kube-api-access-sfzzq\") pod \"83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd\" (UID: \"83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd\") " Dec 01 15:06:33 crc kubenswrapper[4637]: I1201 15:06:33.810119 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd-config-data\") pod \"83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd\" (UID: \"83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd\") " Dec 01 15:06:33 crc kubenswrapper[4637]: I1201 15:06:33.810200 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd-combined-ca-bundle\") pod \"83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd\" (UID: \"83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd\") " Dec 01 15:06:33 crc kubenswrapper[4637]: I1201 15:06:33.837294 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd-kube-api-access-sfzzq" (OuterVolumeSpecName: "kube-api-access-sfzzq") pod "83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd" (UID: "83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd"). InnerVolumeSpecName "kube-api-access-sfzzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:06:33 crc kubenswrapper[4637]: I1201 15:06:33.870167 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd-config-data" (OuterVolumeSpecName: "config-data") pod "83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd" (UID: "83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:06:33 crc kubenswrapper[4637]: I1201 15:06:33.880885 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd" (UID: "83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:06:33 crc kubenswrapper[4637]: I1201 15:06:33.912419 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfzzq\" (UniqueName: \"kubernetes.io/projected/83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd-kube-api-access-sfzzq\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:33 crc kubenswrapper[4637]: I1201 15:06:33.912456 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:33 crc kubenswrapper[4637]: I1201 15:06:33.912467 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:33 crc kubenswrapper[4637]: I1201 15:06:33.972517 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.013640 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbjjk\" (UniqueName: \"kubernetes.io/projected/b73ad837-b207-4701-9444-13a324388bcc-kube-api-access-jbjjk\") pod \"b73ad837-b207-4701-9444-13a324388bcc\" (UID: \"b73ad837-b207-4701-9444-13a324388bcc\") " Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.013687 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b73ad837-b207-4701-9444-13a324388bcc-config-data\") pod \"b73ad837-b207-4701-9444-13a324388bcc\" (UID: \"b73ad837-b207-4701-9444-13a324388bcc\") " Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.013744 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b73ad837-b207-4701-9444-13a324388bcc-logs\") pod \"b73ad837-b207-4701-9444-13a324388bcc\" (UID: \"b73ad837-b207-4701-9444-13a324388bcc\") " Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.013800 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b73ad837-b207-4701-9444-13a324388bcc-combined-ca-bundle\") pod \"b73ad837-b207-4701-9444-13a324388bcc\" (UID: \"b73ad837-b207-4701-9444-13a324388bcc\") " Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.014222 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b73ad837-b207-4701-9444-13a324388bcc-logs" (OuterVolumeSpecName: "logs") pod "b73ad837-b207-4701-9444-13a324388bcc" (UID: "b73ad837-b207-4701-9444-13a324388bcc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.025212 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b73ad837-b207-4701-9444-13a324388bcc-kube-api-access-jbjjk" (OuterVolumeSpecName: "kube-api-access-jbjjk") pod "b73ad837-b207-4701-9444-13a324388bcc" (UID: "b73ad837-b207-4701-9444-13a324388bcc"). InnerVolumeSpecName "kube-api-access-jbjjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.047168 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b73ad837-b207-4701-9444-13a324388bcc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b73ad837-b207-4701-9444-13a324388bcc" (UID: "b73ad837-b207-4701-9444-13a324388bcc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.056007 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b73ad837-b207-4701-9444-13a324388bcc-config-data" (OuterVolumeSpecName: "config-data") pod "b73ad837-b207-4701-9444-13a324388bcc" (UID: "b73ad837-b207-4701-9444-13a324388bcc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.115732 4637 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b73ad837-b207-4701-9444-13a324388bcc-logs\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.115770 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b73ad837-b207-4701-9444-13a324388bcc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.115786 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbjjk\" (UniqueName: \"kubernetes.io/projected/b73ad837-b207-4701-9444-13a324388bcc-kube-api-access-jbjjk\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.115799 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b73ad837-b207-4701-9444-13a324388bcc-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.270400 4637 generic.go:334] "Generic (PLEG): container finished" podID="b73ad837-b207-4701-9444-13a324388bcc" containerID="485ad343a6bde29aaba85f0c777b7285adf149e912c8da7c19efbed2a385abf1" exitCode=0 Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.270458 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.270488 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b73ad837-b207-4701-9444-13a324388bcc","Type":"ContainerDied","Data":"485ad343a6bde29aaba85f0c777b7285adf149e912c8da7c19efbed2a385abf1"} Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.270522 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b73ad837-b207-4701-9444-13a324388bcc","Type":"ContainerDied","Data":"720384660c7e50c208e635873f14cab054e9ad5a8629e0efe70870bd7e19a315"} Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.270547 4637 scope.go:117] "RemoveContainer" containerID="485ad343a6bde29aaba85f0c777b7285adf149e912c8da7c19efbed2a385abf1" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.273710 4637 generic.go:334] "Generic (PLEG): container finished" podID="83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd" containerID="65d4c3a12bb741a5379e1b2c4363d1db6c3baa0bee92acb5a77d894761aa9ae8" exitCode=0 Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.273745 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd","Type":"ContainerDied","Data":"65d4c3a12bb741a5379e1b2c4363d1db6c3baa0bee92acb5a77d894761aa9ae8"} Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.273754 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.273767 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd","Type":"ContainerDied","Data":"2b8a142fb5b71d6daba2262e006befdc92ad0dee88239e541766a11a539b75be"} Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.303950 4637 scope.go:117] "RemoveContainer" containerID="08f90a21413bb38296f6c0b0ebdd518b59edf43d85273fdaa8f8489202b605d3" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.335968 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.339627 4637 scope.go:117] "RemoveContainer" containerID="485ad343a6bde29aaba85f0c777b7285adf149e912c8da7c19efbed2a385abf1" Dec 01 15:06:34 crc kubenswrapper[4637]: E1201 15:06:34.340189 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"485ad343a6bde29aaba85f0c777b7285adf149e912c8da7c19efbed2a385abf1\": container with ID starting with 485ad343a6bde29aaba85f0c777b7285adf149e912c8da7c19efbed2a385abf1 not found: ID does not exist" containerID="485ad343a6bde29aaba85f0c777b7285adf149e912c8da7c19efbed2a385abf1" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.340213 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"485ad343a6bde29aaba85f0c777b7285adf149e912c8da7c19efbed2a385abf1"} err="failed to get container status \"485ad343a6bde29aaba85f0c777b7285adf149e912c8da7c19efbed2a385abf1\": rpc error: code = NotFound desc = could not find container \"485ad343a6bde29aaba85f0c777b7285adf149e912c8da7c19efbed2a385abf1\": container with ID starting with 485ad343a6bde29aaba85f0c777b7285adf149e912c8da7c19efbed2a385abf1 not found: ID does not exist" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.340232 4637 
scope.go:117] "RemoveContainer" containerID="08f90a21413bb38296f6c0b0ebdd518b59edf43d85273fdaa8f8489202b605d3" Dec 01 15:06:34 crc kubenswrapper[4637]: E1201 15:06:34.342859 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08f90a21413bb38296f6c0b0ebdd518b59edf43d85273fdaa8f8489202b605d3\": container with ID starting with 08f90a21413bb38296f6c0b0ebdd518b59edf43d85273fdaa8f8489202b605d3 not found: ID does not exist" containerID="08f90a21413bb38296f6c0b0ebdd518b59edf43d85273fdaa8f8489202b605d3" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.342889 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08f90a21413bb38296f6c0b0ebdd518b59edf43d85273fdaa8f8489202b605d3"} err="failed to get container status \"08f90a21413bb38296f6c0b0ebdd518b59edf43d85273fdaa8f8489202b605d3\": rpc error: code = NotFound desc = could not find container \"08f90a21413bb38296f6c0b0ebdd518b59edf43d85273fdaa8f8489202b605d3\": container with ID starting with 08f90a21413bb38296f6c0b0ebdd518b59edf43d85273fdaa8f8489202b605d3 not found: ID does not exist" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.342904 4637 scope.go:117] "RemoveContainer" containerID="65d4c3a12bb741a5379e1b2c4363d1db6c3baa0bee92acb5a77d894761aa9ae8" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.358001 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.371009 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.388123 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 01 15:06:34 crc kubenswrapper[4637]: E1201 15:06:34.388695 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b73ad837-b207-4701-9444-13a324388bcc" containerName="nova-api-log" Dec 01 
15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.388723 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="b73ad837-b207-4701-9444-13a324388bcc" containerName="nova-api-log" Dec 01 15:06:34 crc kubenswrapper[4637]: E1201 15:06:34.388741 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd" containerName="nova-scheduler-scheduler" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.388752 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd" containerName="nova-scheduler-scheduler" Dec 01 15:06:34 crc kubenswrapper[4637]: E1201 15:06:34.388772 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b73ad837-b207-4701-9444-13a324388bcc" containerName="nova-api-api" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.388780 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="b73ad837-b207-4701-9444-13a324388bcc" containerName="nova-api-api" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.389128 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="b73ad837-b207-4701-9444-13a324388bcc" containerName="nova-api-api" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.389154 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="b73ad837-b207-4701-9444-13a324388bcc" containerName="nova-api-log" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.389167 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd" containerName="nova-scheduler-scheduler" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.390532 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.395993 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.397015 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.407989 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.422263 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.423591 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.427203 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.431421 4637 scope.go:117] "RemoveContainer" containerID="65d4c3a12bb741a5379e1b2c4363d1db6c3baa0bee92acb5a77d894761aa9ae8" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.431644 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 15:06:34 crc kubenswrapper[4637]: E1201 15:06:34.437642 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65d4c3a12bb741a5379e1b2c4363d1db6c3baa0bee92acb5a77d894761aa9ae8\": container with ID starting with 65d4c3a12bb741a5379e1b2c4363d1db6c3baa0bee92acb5a77d894761aa9ae8 not found: ID does not exist" containerID="65d4c3a12bb741a5379e1b2c4363d1db6c3baa0bee92acb5a77d894761aa9ae8" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.437678 4637 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"65d4c3a12bb741a5379e1b2c4363d1db6c3baa0bee92acb5a77d894761aa9ae8"} err="failed to get container status \"65d4c3a12bb741a5379e1b2c4363d1db6c3baa0bee92acb5a77d894761aa9ae8\": rpc error: code = NotFound desc = could not find container \"65d4c3a12bb741a5379e1b2c4363d1db6c3baa0bee92acb5a77d894761aa9ae8\": container with ID starting with 65d4c3a12bb741a5379e1b2c4363d1db6c3baa0bee92acb5a77d894761aa9ae8 not found: ID does not exist" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.525668 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc\") " pod="openstack/nova-scheduler-0" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.526070 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65cac97b-b9c2-4125-8d93-2d4034cf68a5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"65cac97b-b9c2-4125-8d93-2d4034cf68a5\") " pod="openstack/nova-api-0" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.526098 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc-config-data\") pod \"nova-scheduler-0\" (UID: \"ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc\") " pod="openstack/nova-scheduler-0" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.526120 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl4rj\" (UniqueName: \"kubernetes.io/projected/ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc-kube-api-access-zl4rj\") pod \"nova-scheduler-0\" (UID: \"ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc\") " 
pod="openstack/nova-scheduler-0" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.526150 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65cac97b-b9c2-4125-8d93-2d4034cf68a5-logs\") pod \"nova-api-0\" (UID: \"65cac97b-b9c2-4125-8d93-2d4034cf68a5\") " pod="openstack/nova-api-0" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.526192 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65cac97b-b9c2-4125-8d93-2d4034cf68a5-config-data\") pod \"nova-api-0\" (UID: \"65cac97b-b9c2-4125-8d93-2d4034cf68a5\") " pod="openstack/nova-api-0" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.526311 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcvnd\" (UniqueName: \"kubernetes.io/projected/65cac97b-b9c2-4125-8d93-2d4034cf68a5-kube-api-access-kcvnd\") pod \"nova-api-0\" (UID: \"65cac97b-b9c2-4125-8d93-2d4034cf68a5\") " pod="openstack/nova-api-0" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.628456 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc\") " pod="openstack/nova-scheduler-0" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.628512 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65cac97b-b9c2-4125-8d93-2d4034cf68a5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"65cac97b-b9c2-4125-8d93-2d4034cf68a5\") " pod="openstack/nova-api-0" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.628530 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc-config-data\") pod \"nova-scheduler-0\" (UID: \"ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc\") " pod="openstack/nova-scheduler-0" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.628552 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl4rj\" (UniqueName: \"kubernetes.io/projected/ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc-kube-api-access-zl4rj\") pod \"nova-scheduler-0\" (UID: \"ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc\") " pod="openstack/nova-scheduler-0" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.628582 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65cac97b-b9c2-4125-8d93-2d4034cf68a5-logs\") pod \"nova-api-0\" (UID: \"65cac97b-b9c2-4125-8d93-2d4034cf68a5\") " pod="openstack/nova-api-0" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.628634 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65cac97b-b9c2-4125-8d93-2d4034cf68a5-config-data\") pod \"nova-api-0\" (UID: \"65cac97b-b9c2-4125-8d93-2d4034cf68a5\") " pod="openstack/nova-api-0" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.628692 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcvnd\" (UniqueName: \"kubernetes.io/projected/65cac97b-b9c2-4125-8d93-2d4034cf68a5-kube-api-access-kcvnd\") pod \"nova-api-0\" (UID: \"65cac97b-b9c2-4125-8d93-2d4034cf68a5\") " pod="openstack/nova-api-0" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.629516 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65cac97b-b9c2-4125-8d93-2d4034cf68a5-logs\") pod \"nova-api-0\" (UID: \"65cac97b-b9c2-4125-8d93-2d4034cf68a5\") " pod="openstack/nova-api-0" Dec 01 15:06:34 crc 
kubenswrapper[4637]: I1201 15:06:34.633490 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc-config-data\") pod \"nova-scheduler-0\" (UID: \"ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc\") " pod="openstack/nova-scheduler-0" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.634560 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65cac97b-b9c2-4125-8d93-2d4034cf68a5-config-data\") pod \"nova-api-0\" (UID: \"65cac97b-b9c2-4125-8d93-2d4034cf68a5\") " pod="openstack/nova-api-0" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.643681 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc\") " pod="openstack/nova-scheduler-0" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.649967 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65cac97b-b9c2-4125-8d93-2d4034cf68a5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"65cac97b-b9c2-4125-8d93-2d4034cf68a5\") " pod="openstack/nova-api-0" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.650225 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcvnd\" (UniqueName: \"kubernetes.io/projected/65cac97b-b9c2-4125-8d93-2d4034cf68a5-kube-api-access-kcvnd\") pod \"nova-api-0\" (UID: \"65cac97b-b9c2-4125-8d93-2d4034cf68a5\") " pod="openstack/nova-api-0" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.653827 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl4rj\" (UniqueName: \"kubernetes.io/projected/ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc-kube-api-access-zl4rj\") 
pod \"nova-scheduler-0\" (UID: \"ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc\") " pod="openstack/nova-scheduler-0" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.738625 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 01 15:06:34 crc kubenswrapper[4637]: I1201 15:06:34.754569 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 15:06:35 crc kubenswrapper[4637]: I1201 15:06:35.163325 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 01 15:06:35 crc kubenswrapper[4637]: I1201 15:06:35.347739 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 01 15:06:35 crc kubenswrapper[4637]: I1201 15:06:35.476862 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 15:06:35 crc kubenswrapper[4637]: W1201 15:06:35.481203 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee3781ed_57e4_4baa_9dd6_872ac9d0f6dc.slice/crio-172baf1862ede075db71b5809407b5a95b5afcd2910e57a696dad61062943540 WatchSource:0}: Error finding container 172baf1862ede075db71b5809407b5a95b5afcd2910e57a696dad61062943540: Status 404 returned error can't find the container with id 172baf1862ede075db71b5809407b5a95b5afcd2910e57a696dad61062943540 Dec 01 15:06:35 crc kubenswrapper[4637]: I1201 15:06:35.783459 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd" path="/var/lib/kubelet/pods/83cf05d0-9d1b-4d7e-b5dd-7f16b82b73fd/volumes" Dec 01 15:06:35 crc kubenswrapper[4637]: I1201 15:06:35.784034 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b73ad837-b207-4701-9444-13a324388bcc" path="/var/lib/kubelet/pods/b73ad837-b207-4701-9444-13a324388bcc/volumes" Dec 01 15:06:36 crc kubenswrapper[4637]: I1201 
15:06:36.321322 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"65cac97b-b9c2-4125-8d93-2d4034cf68a5","Type":"ContainerStarted","Data":"28aea11165bcbc933d3ff0801bf06247c73f06a405b2f4a18be5a22eb2a152d2"} Dec 01 15:06:36 crc kubenswrapper[4637]: I1201 15:06:36.321730 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"65cac97b-b9c2-4125-8d93-2d4034cf68a5","Type":"ContainerStarted","Data":"b0e4c925d06c8880cfbbfcc323339edab1efcf3f6447b32187bee2a2ed963e01"} Dec 01 15:06:36 crc kubenswrapper[4637]: I1201 15:06:36.321748 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"65cac97b-b9c2-4125-8d93-2d4034cf68a5","Type":"ContainerStarted","Data":"4e10708cc44cde1f11b2a05f4dda4d2c05c58e209f21f2bbb5a619c2a432300a"} Dec 01 15:06:36 crc kubenswrapper[4637]: I1201 15:06:36.323748 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc","Type":"ContainerStarted","Data":"404946fe883125e8907a149cd4871efc8e32e97b00c3cc0e4cc47d33f99af6dc"} Dec 01 15:06:36 crc kubenswrapper[4637]: I1201 15:06:36.323794 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc","Type":"ContainerStarted","Data":"172baf1862ede075db71b5809407b5a95b5afcd2910e57a696dad61062943540"} Dec 01 15:06:36 crc kubenswrapper[4637]: I1201 15:06:36.350540 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.350511991 podStartE2EDuration="2.350511991s" podCreationTimestamp="2025-12-01 15:06:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:06:36.342786122 +0000 UTC m=+1246.860494960" watchObservedRunningTime="2025-12-01 15:06:36.350511991 +0000 UTC 
m=+1246.868220819" Dec 01 15:06:36 crc kubenswrapper[4637]: I1201 15:06:36.369047 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.369028152 podStartE2EDuration="2.369028152s" podCreationTimestamp="2025-12-01 15:06:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:06:36.360564103 +0000 UTC m=+1246.878272941" watchObservedRunningTime="2025-12-01 15:06:36.369028152 +0000 UTC m=+1246.886736980" Dec 01 15:06:38 crc kubenswrapper[4637]: I1201 15:06:38.873735 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 15:06:38 crc kubenswrapper[4637]: I1201 15:06:38.874415 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="d2a7716a-d336-4d98-97c6-32b6c8b18e28" containerName="kube-state-metrics" containerID="cri-o://c01e34ee19b6ccfceeeb1bb9b60609966c84c1124dac3600bb258bdfb2b8cfd3" gracePeriod=30 Dec 01 15:06:39 crc kubenswrapper[4637]: I1201 15:06:39.351615 4637 generic.go:334] "Generic (PLEG): container finished" podID="d2a7716a-d336-4d98-97c6-32b6c8b18e28" containerID="c01e34ee19b6ccfceeeb1bb9b60609966c84c1124dac3600bb258bdfb2b8cfd3" exitCode=2 Dec 01 15:06:39 crc kubenswrapper[4637]: I1201 15:06:39.351725 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d2a7716a-d336-4d98-97c6-32b6c8b18e28","Type":"ContainerDied","Data":"c01e34ee19b6ccfceeeb1bb9b60609966c84c1124dac3600bb258bdfb2b8cfd3"} Dec 01 15:06:39 crc kubenswrapper[4637]: I1201 15:06:39.351992 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d2a7716a-d336-4d98-97c6-32b6c8b18e28","Type":"ContainerDied","Data":"5e211dae9c4e9c11d896850e549961bec81e1e8eb5f023a60f714f6de37ef4a9"} Dec 01 15:06:39 crc kubenswrapper[4637]: 
I1201 15:06:39.352013 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e211dae9c4e9c11d896850e549961bec81e1e8eb5f023a60f714f6de37ef4a9" Dec 01 15:06:39 crc kubenswrapper[4637]: I1201 15:06:39.359278 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 01 15:06:39 crc kubenswrapper[4637]: I1201 15:06:39.428128 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwxk6\" (UniqueName: \"kubernetes.io/projected/d2a7716a-d336-4d98-97c6-32b6c8b18e28-kube-api-access-rwxk6\") pod \"d2a7716a-d336-4d98-97c6-32b6c8b18e28\" (UID: \"d2a7716a-d336-4d98-97c6-32b6c8b18e28\") " Dec 01 15:06:39 crc kubenswrapper[4637]: I1201 15:06:39.447199 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2a7716a-d336-4d98-97c6-32b6c8b18e28-kube-api-access-rwxk6" (OuterVolumeSpecName: "kube-api-access-rwxk6") pod "d2a7716a-d336-4d98-97c6-32b6c8b18e28" (UID: "d2a7716a-d336-4d98-97c6-32b6c8b18e28"). InnerVolumeSpecName "kube-api-access-rwxk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:06:39 crc kubenswrapper[4637]: I1201 15:06:39.530145 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwxk6\" (UniqueName: \"kubernetes.io/projected/d2a7716a-d336-4d98-97c6-32b6c8b18e28-kube-api-access-rwxk6\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:39 crc kubenswrapper[4637]: I1201 15:06:39.755747 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.360425 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.385639 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.395233 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.408821 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 15:06:40 crc kubenswrapper[4637]: E1201 15:06:40.409368 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2a7716a-d336-4d98-97c6-32b6c8b18e28" containerName="kube-state-metrics" Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.409397 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2a7716a-d336-4d98-97c6-32b6c8b18e28" containerName="kube-state-metrics" Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.409578 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2a7716a-d336-4d98-97c6-32b6c8b18e28" containerName="kube-state-metrics" Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.410363 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.419754 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.420124 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.420223 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.449175 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0062383a-47a6-4c14-bfeb-0ea63ac93305-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0062383a-47a6-4c14-bfeb-0ea63ac93305\") " pod="openstack/kube-state-metrics-0" Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.449221 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0062383a-47a6-4c14-bfeb-0ea63ac93305-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0062383a-47a6-4c14-bfeb-0ea63ac93305\") " pod="openstack/kube-state-metrics-0" Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.449241 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgqhd\" (UniqueName: \"kubernetes.io/projected/0062383a-47a6-4c14-bfeb-0ea63ac93305-kube-api-access-vgqhd\") pod \"kube-state-metrics-0\" (UID: \"0062383a-47a6-4c14-bfeb-0ea63ac93305\") " pod="openstack/kube-state-metrics-0" Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.449266 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/0062383a-47a6-4c14-bfeb-0ea63ac93305-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0062383a-47a6-4c14-bfeb-0ea63ac93305\") " pod="openstack/kube-state-metrics-0" Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.551430 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0062383a-47a6-4c14-bfeb-0ea63ac93305-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0062383a-47a6-4c14-bfeb-0ea63ac93305\") " pod="openstack/kube-state-metrics-0" Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.551729 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0062383a-47a6-4c14-bfeb-0ea63ac93305-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0062383a-47a6-4c14-bfeb-0ea63ac93305\") " pod="openstack/kube-state-metrics-0" Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.551814 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgqhd\" (UniqueName: \"kubernetes.io/projected/0062383a-47a6-4c14-bfeb-0ea63ac93305-kube-api-access-vgqhd\") pod \"kube-state-metrics-0\" (UID: \"0062383a-47a6-4c14-bfeb-0ea63ac93305\") " pod="openstack/kube-state-metrics-0" Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.551897 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0062383a-47a6-4c14-bfeb-0ea63ac93305-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0062383a-47a6-4c14-bfeb-0ea63ac93305\") " pod="openstack/kube-state-metrics-0" Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.556819 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/0062383a-47a6-4c14-bfeb-0ea63ac93305-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0062383a-47a6-4c14-bfeb-0ea63ac93305\") " pod="openstack/kube-state-metrics-0" Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.556924 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0062383a-47a6-4c14-bfeb-0ea63ac93305-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0062383a-47a6-4c14-bfeb-0ea63ac93305\") " pod="openstack/kube-state-metrics-0" Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.557589 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0062383a-47a6-4c14-bfeb-0ea63ac93305-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0062383a-47a6-4c14-bfeb-0ea63ac93305\") " pod="openstack/kube-state-metrics-0" Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.570568 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgqhd\" (UniqueName: \"kubernetes.io/projected/0062383a-47a6-4c14-bfeb-0ea63ac93305-kube-api-access-vgqhd\") pod \"kube-state-metrics-0\" (UID: \"0062383a-47a6-4c14-bfeb-0ea63ac93305\") " pod="openstack/kube-state-metrics-0" Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.678138 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.734050 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.750430 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.750777 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="93d1bb45-b84a-46f0-8c3b-e906382a7020" containerName="ceilometer-central-agent" containerID="cri-o://7e3b9ba18cb546a8aadb454296bf2fe279b01c336934492a27610ae0fb9a8251" gracePeriod=30 Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.750843 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="93d1bb45-b84a-46f0-8c3b-e906382a7020" containerName="proxy-httpd" containerID="cri-o://b93dd185467caff69ed6d745379f87d877abe036dd7e7af7c83962eb3c9128a7" gracePeriod=30 Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.750918 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="93d1bb45-b84a-46f0-8c3b-e906382a7020" containerName="sg-core" containerID="cri-o://1f79e2f4ec8a872fd15662df143ab7d0e336d047974b02ded57bb7f5932355da" gracePeriod=30 Dec 01 15:06:40 crc kubenswrapper[4637]: I1201 15:06:40.750988 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="93d1bb45-b84a-46f0-8c3b-e906382a7020" containerName="ceilometer-notification-agent" containerID="cri-o://c00c8962aa1af7ce8804f4991e9f57221bcad536881988a1f51fce3d44b3e754" gracePeriod=30 Dec 01 15:06:41 crc kubenswrapper[4637]: I1201 15:06:41.263679 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 15:06:41 crc kubenswrapper[4637]: I1201 15:06:41.275547 4637 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 15:06:41 crc kubenswrapper[4637]: I1201 15:06:41.371759 4637 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0062383a-47a6-4c14-bfeb-0ea63ac93305","Type":"ContainerStarted","Data":"0f18a00d4b94fc26bbf209740c5e09cf7402c0137d9dc552491317533823c25c"} Dec 01 15:06:41 crc kubenswrapper[4637]: I1201 15:06:41.374982 4637 generic.go:334] "Generic (PLEG): container finished" podID="93d1bb45-b84a-46f0-8c3b-e906382a7020" containerID="b93dd185467caff69ed6d745379f87d877abe036dd7e7af7c83962eb3c9128a7" exitCode=0 Dec 01 15:06:41 crc kubenswrapper[4637]: I1201 15:06:41.375008 4637 generic.go:334] "Generic (PLEG): container finished" podID="93d1bb45-b84a-46f0-8c3b-e906382a7020" containerID="1f79e2f4ec8a872fd15662df143ab7d0e336d047974b02ded57bb7f5932355da" exitCode=2 Dec 01 15:06:41 crc kubenswrapper[4637]: I1201 15:06:41.375019 4637 generic.go:334] "Generic (PLEG): container finished" podID="93d1bb45-b84a-46f0-8c3b-e906382a7020" containerID="7e3b9ba18cb546a8aadb454296bf2fe279b01c336934492a27610ae0fb9a8251" exitCode=0 Dec 01 15:06:41 crc kubenswrapper[4637]: I1201 15:06:41.375037 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93d1bb45-b84a-46f0-8c3b-e906382a7020","Type":"ContainerDied","Data":"b93dd185467caff69ed6d745379f87d877abe036dd7e7af7c83962eb3c9128a7"} Dec 01 15:06:41 crc kubenswrapper[4637]: I1201 15:06:41.375060 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93d1bb45-b84a-46f0-8c3b-e906382a7020","Type":"ContainerDied","Data":"1f79e2f4ec8a872fd15662df143ab7d0e336d047974b02ded57bb7f5932355da"} Dec 01 15:06:41 crc kubenswrapper[4637]: I1201 15:06:41.375072 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93d1bb45-b84a-46f0-8c3b-e906382a7020","Type":"ContainerDied","Data":"7e3b9ba18cb546a8aadb454296bf2fe279b01c336934492a27610ae0fb9a8251"} Dec 01 15:06:41 crc kubenswrapper[4637]: I1201 15:06:41.782226 4637 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="d2a7716a-d336-4d98-97c6-32b6c8b18e28" path="/var/lib/kubelet/pods/d2a7716a-d336-4d98-97c6-32b6c8b18e28/volumes" Dec 01 15:06:43 crc kubenswrapper[4637]: I1201 15:06:43.403216 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0062383a-47a6-4c14-bfeb-0ea63ac93305","Type":"ContainerStarted","Data":"f50e63ea028331d55041e636d9ad47a774931bcc5aa2eb7e65b8b5aede891c27"} Dec 01 15:06:43 crc kubenswrapper[4637]: I1201 15:06:43.405329 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 01 15:06:43 crc kubenswrapper[4637]: I1201 15:06:43.447697 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.5774653770000002 podStartE2EDuration="3.447676986s" podCreationTimestamp="2025-12-01 15:06:40 +0000 UTC" firstStartedPulling="2025-12-01 15:06:41.275341038 +0000 UTC m=+1251.793049866" lastFinishedPulling="2025-12-01 15:06:42.145552647 +0000 UTC m=+1252.663261475" observedRunningTime="2025-12-01 15:06:43.440673646 +0000 UTC m=+1253.958382474" watchObservedRunningTime="2025-12-01 15:06:43.447676986 +0000 UTC m=+1253.965385814" Dec 01 15:06:44 crc kubenswrapper[4637]: I1201 15:06:44.427622 4637 generic.go:334] "Generic (PLEG): container finished" podID="93d1bb45-b84a-46f0-8c3b-e906382a7020" containerID="c00c8962aa1af7ce8804f4991e9f57221bcad536881988a1f51fce3d44b3e754" exitCode=0 Dec 01 15:06:44 crc kubenswrapper[4637]: I1201 15:06:44.427701 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93d1bb45-b84a-46f0-8c3b-e906382a7020","Type":"ContainerDied","Data":"c00c8962aa1af7ce8804f4991e9f57221bcad536881988a1f51fce3d44b3e754"} Dec 01 15:06:44 crc kubenswrapper[4637]: I1201 15:06:44.555210 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:06:44 crc kubenswrapper[4637]: I1201 15:06:44.646544 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93d1bb45-b84a-46f0-8c3b-e906382a7020-config-data\") pod \"93d1bb45-b84a-46f0-8c3b-e906382a7020\" (UID: \"93d1bb45-b84a-46f0-8c3b-e906382a7020\") " Dec 01 15:06:44 crc kubenswrapper[4637]: I1201 15:06:44.646672 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d1bb45-b84a-46f0-8c3b-e906382a7020-combined-ca-bundle\") pod \"93d1bb45-b84a-46f0-8c3b-e906382a7020\" (UID: \"93d1bb45-b84a-46f0-8c3b-e906382a7020\") " Dec 01 15:06:44 crc kubenswrapper[4637]: I1201 15:06:44.647578 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93d1bb45-b84a-46f0-8c3b-e906382a7020-run-httpd\") pod \"93d1bb45-b84a-46f0-8c3b-e906382a7020\" (UID: \"93d1bb45-b84a-46f0-8c3b-e906382a7020\") " Dec 01 15:06:44 crc kubenswrapper[4637]: I1201 15:06:44.647651 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcp2d\" (UniqueName: \"kubernetes.io/projected/93d1bb45-b84a-46f0-8c3b-e906382a7020-kube-api-access-rcp2d\") pod \"93d1bb45-b84a-46f0-8c3b-e906382a7020\" (UID: \"93d1bb45-b84a-46f0-8c3b-e906382a7020\") " Dec 01 15:06:44 crc kubenswrapper[4637]: I1201 15:06:44.647675 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93d1bb45-b84a-46f0-8c3b-e906382a7020-scripts\") pod \"93d1bb45-b84a-46f0-8c3b-e906382a7020\" (UID: \"93d1bb45-b84a-46f0-8c3b-e906382a7020\") " Dec 01 15:06:44 crc kubenswrapper[4637]: I1201 15:06:44.647706 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/93d1bb45-b84a-46f0-8c3b-e906382a7020-log-httpd\") pod \"93d1bb45-b84a-46f0-8c3b-e906382a7020\" (UID: \"93d1bb45-b84a-46f0-8c3b-e906382a7020\") " Dec 01 15:06:44 crc kubenswrapper[4637]: I1201 15:06:44.647761 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/93d1bb45-b84a-46f0-8c3b-e906382a7020-sg-core-conf-yaml\") pod \"93d1bb45-b84a-46f0-8c3b-e906382a7020\" (UID: \"93d1bb45-b84a-46f0-8c3b-e906382a7020\") " Dec 01 15:06:44 crc kubenswrapper[4637]: I1201 15:06:44.651868 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93d1bb45-b84a-46f0-8c3b-e906382a7020-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "93d1bb45-b84a-46f0-8c3b-e906382a7020" (UID: "93d1bb45-b84a-46f0-8c3b-e906382a7020"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:06:44 crc kubenswrapper[4637]: I1201 15:06:44.651991 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93d1bb45-b84a-46f0-8c3b-e906382a7020-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "93d1bb45-b84a-46f0-8c3b-e906382a7020" (UID: "93d1bb45-b84a-46f0-8c3b-e906382a7020"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:06:44 crc kubenswrapper[4637]: I1201 15:06:44.654405 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93d1bb45-b84a-46f0-8c3b-e906382a7020-scripts" (OuterVolumeSpecName: "scripts") pod "93d1bb45-b84a-46f0-8c3b-e906382a7020" (UID: "93d1bb45-b84a-46f0-8c3b-e906382a7020"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:06:44 crc kubenswrapper[4637]: I1201 15:06:44.670651 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93d1bb45-b84a-46f0-8c3b-e906382a7020-kube-api-access-rcp2d" (OuterVolumeSpecName: "kube-api-access-rcp2d") pod "93d1bb45-b84a-46f0-8c3b-e906382a7020" (UID: "93d1bb45-b84a-46f0-8c3b-e906382a7020"). InnerVolumeSpecName "kube-api-access-rcp2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:06:44 crc kubenswrapper[4637]: I1201 15:06:44.714299 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93d1bb45-b84a-46f0-8c3b-e906382a7020-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "93d1bb45-b84a-46f0-8c3b-e906382a7020" (UID: "93d1bb45-b84a-46f0-8c3b-e906382a7020"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:06:44 crc kubenswrapper[4637]: I1201 15:06:44.740757 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 01 15:06:44 crc kubenswrapper[4637]: I1201 15:06:44.741867 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 01 15:06:44 crc kubenswrapper[4637]: I1201 15:06:44.750142 4637 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/93d1bb45-b84a-46f0-8c3b-e906382a7020-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:44 crc kubenswrapper[4637]: I1201 15:06:44.750173 4637 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93d1bb45-b84a-46f0-8c3b-e906382a7020-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:44 crc kubenswrapper[4637]: I1201 15:06:44.750182 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcp2d\" (UniqueName: 
\"kubernetes.io/projected/93d1bb45-b84a-46f0-8c3b-e906382a7020-kube-api-access-rcp2d\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:44 crc kubenswrapper[4637]: I1201 15:06:44.750192 4637 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93d1bb45-b84a-46f0-8c3b-e906382a7020-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:44 crc kubenswrapper[4637]: I1201 15:06:44.750208 4637 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93d1bb45-b84a-46f0-8c3b-e906382a7020-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:44 crc kubenswrapper[4637]: I1201 15:06:44.759511 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 01 15:06:44 crc kubenswrapper[4637]: I1201 15:06:44.796229 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 01 15:06:44 crc kubenswrapper[4637]: I1201 15:06:44.796084 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93d1bb45-b84a-46f0-8c3b-e906382a7020-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93d1bb45-b84a-46f0-8c3b-e906382a7020" (UID: "93d1bb45-b84a-46f0-8c3b-e906382a7020"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:06:44 crc kubenswrapper[4637]: I1201 15:06:44.828669 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93d1bb45-b84a-46f0-8c3b-e906382a7020-config-data" (OuterVolumeSpecName: "config-data") pod "93d1bb45-b84a-46f0-8c3b-e906382a7020" (UID: "93d1bb45-b84a-46f0-8c3b-e906382a7020"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:06:44 crc kubenswrapper[4637]: I1201 15:06:44.852264 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93d1bb45-b84a-46f0-8c3b-e906382a7020-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:44 crc kubenswrapper[4637]: I1201 15:06:44.852299 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d1bb45-b84a-46f0-8c3b-e906382a7020-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.474830 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.476168 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93d1bb45-b84a-46f0-8c3b-e906382a7020","Type":"ContainerDied","Data":"8bb64f057debdddd8c8ceb564f67d2c51803f26852b2c3cf503ce912c7948c9d"} Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.476220 4637 scope.go:117] "RemoveContainer" containerID="b93dd185467caff69ed6d745379f87d877abe036dd7e7af7c83962eb3c9128a7" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.543533 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.554326 4637 scope.go:117] "RemoveContainer" containerID="1f79e2f4ec8a872fd15662df143ab7d0e336d047974b02ded57bb7f5932355da" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.561422 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.570605 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.582606 4637 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ceilometer-0"] Dec 01 15:06:45 crc kubenswrapper[4637]: E1201 15:06:45.583187 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d1bb45-b84a-46f0-8c3b-e906382a7020" containerName="ceilometer-central-agent" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.583206 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d1bb45-b84a-46f0-8c3b-e906382a7020" containerName="ceilometer-central-agent" Dec 01 15:06:45 crc kubenswrapper[4637]: E1201 15:06:45.583246 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d1bb45-b84a-46f0-8c3b-e906382a7020" containerName="ceilometer-notification-agent" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.583256 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d1bb45-b84a-46f0-8c3b-e906382a7020" containerName="ceilometer-notification-agent" Dec 01 15:06:45 crc kubenswrapper[4637]: E1201 15:06:45.583273 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d1bb45-b84a-46f0-8c3b-e906382a7020" containerName="sg-core" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.583281 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d1bb45-b84a-46f0-8c3b-e906382a7020" containerName="sg-core" Dec 01 15:06:45 crc kubenswrapper[4637]: E1201 15:06:45.583298 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d1bb45-b84a-46f0-8c3b-e906382a7020" containerName="proxy-httpd" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.583306 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d1bb45-b84a-46f0-8c3b-e906382a7020" containerName="proxy-httpd" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.583564 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="93d1bb45-b84a-46f0-8c3b-e906382a7020" containerName="ceilometer-central-agent" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.583581 4637 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="93d1bb45-b84a-46f0-8c3b-e906382a7020" containerName="proxy-httpd" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.583607 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="93d1bb45-b84a-46f0-8c3b-e906382a7020" containerName="ceilometer-notification-agent" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.583619 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="93d1bb45-b84a-46f0-8c3b-e906382a7020" containerName="sg-core" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.585760 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.592851 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.593027 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.593132 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.596972 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.627060 4637 scope.go:117] "RemoveContainer" containerID="c00c8962aa1af7ce8804f4991e9f57221bcad536881988a1f51fce3d44b3e754" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.663104 4637 scope.go:117] "RemoveContainer" containerID="7e3b9ba18cb546a8aadb454296bf2fe279b01c336934492a27610ae0fb9a8251" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.678636 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-log-httpd\") pod \"ceilometer-0\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " 
pod="openstack/ceilometer-0" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.678708 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " pod="openstack/ceilometer-0" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.678752 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w4d8\" (UniqueName: \"kubernetes.io/projected/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-kube-api-access-9w4d8\") pod \"ceilometer-0\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " pod="openstack/ceilometer-0" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.678781 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " pod="openstack/ceilometer-0" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.678809 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-config-data\") pod \"ceilometer-0\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " pod="openstack/ceilometer-0" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.678829 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-run-httpd\") pod \"ceilometer-0\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " pod="openstack/ceilometer-0" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.678854 4637 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " pod="openstack/ceilometer-0" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.678914 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-scripts\") pod \"ceilometer-0\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " pod="openstack/ceilometer-0" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.781522 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-scripts\") pod \"ceilometer-0\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " pod="openstack/ceilometer-0" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.781597 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-log-httpd\") pod \"ceilometer-0\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " pod="openstack/ceilometer-0" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.781659 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " pod="openstack/ceilometer-0" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.781704 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w4d8\" (UniqueName: \"kubernetes.io/projected/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-kube-api-access-9w4d8\") pod 
\"ceilometer-0\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " pod="openstack/ceilometer-0" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.781753 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " pod="openstack/ceilometer-0" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.781785 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-config-data\") pod \"ceilometer-0\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " pod="openstack/ceilometer-0" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.781819 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-run-httpd\") pod \"ceilometer-0\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " pod="openstack/ceilometer-0" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.781846 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " pod="openstack/ceilometer-0" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.782833 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-log-httpd\") pod \"ceilometer-0\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " pod="openstack/ceilometer-0" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.783203 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="93d1bb45-b84a-46f0-8c3b-e906382a7020" path="/var/lib/kubelet/pods/93d1bb45-b84a-46f0-8c3b-e906382a7020/volumes" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.784542 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-run-httpd\") pod \"ceilometer-0\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " pod="openstack/ceilometer-0" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.788075 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-config-data\") pod \"ceilometer-0\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " pod="openstack/ceilometer-0" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.788219 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-scripts\") pod \"ceilometer-0\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " pod="openstack/ceilometer-0" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.802361 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " pod="openstack/ceilometer-0" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.803131 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " pod="openstack/ceilometer-0" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.806393 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w4d8\" 
(UniqueName: \"kubernetes.io/projected/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-kube-api-access-9w4d8\") pod \"ceilometer-0\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " pod="openstack/ceilometer-0" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.814847 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " pod="openstack/ceilometer-0" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.823160 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="65cac97b-b9c2-4125-8d93-2d4034cf68a5" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.192:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.823155 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="65cac97b-b9c2-4125-8d93-2d4034cf68a5" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.192:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 15:06:45 crc kubenswrapper[4637]: I1201 15:06:45.923478 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:06:46 crc kubenswrapper[4637]: I1201 15:06:46.510343 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:06:46 crc kubenswrapper[4637]: W1201 15:06:46.521799 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45d44a33_a1ba_40c7_a0d6_ebb0fb531b45.slice/crio-ccb0d324e3c4f023a453e25b062f5f173a6cac49e504a6c6efcccbe3115d9f98 WatchSource:0}: Error finding container ccb0d324e3c4f023a453e25b062f5f173a6cac49e504a6c6efcccbe3115d9f98: Status 404 returned error can't find the container with id ccb0d324e3c4f023a453e25b062f5f173a6cac49e504a6c6efcccbe3115d9f98 Dec 01 15:06:47 crc kubenswrapper[4637]: I1201 15:06:47.497104 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45","Type":"ContainerStarted","Data":"ccb0d324e3c4f023a453e25b062f5f173a6cac49e504a6c6efcccbe3115d9f98"} Dec 01 15:06:48 crc kubenswrapper[4637]: I1201 15:06:48.508552 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45","Type":"ContainerStarted","Data":"523f77f9ee122c96c18a35fdfe43503bc55625e979c254d0ec00287053718772"} Dec 01 15:06:49 crc kubenswrapper[4637]: I1201 15:06:49.520547 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45","Type":"ContainerStarted","Data":"68366e2a8ba39ac638252168c46747c9f9fecbe670b22d2e517867e18b79adf2"} Dec 01 15:06:50 crc kubenswrapper[4637]: I1201 15:06:50.536314 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45","Type":"ContainerStarted","Data":"94e6414ef104ca49aa2d45a460e20fac089da5affdbf0651e5758ae5b8758aec"} Dec 01 15:06:50 crc kubenswrapper[4637]: I1201 
15:06:50.746446 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 01 15:06:52 crc kubenswrapper[4637]: I1201 15:06:52.567785 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45","Type":"ContainerStarted","Data":"ebe1bfd06e7cbf3b03c08ce998db21e5c33d265d08c45bc892047c7327299a2a"} Dec 01 15:06:52 crc kubenswrapper[4637]: I1201 15:06:52.568409 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 01 15:06:52 crc kubenswrapper[4637]: I1201 15:06:52.571559 4637 generic.go:334] "Generic (PLEG): container finished" podID="c3a6afdb-ff5e-4280-b516-fdb897da268f" containerID="cb655c74ff501d29e7160a5e02cbd9f14abccb75a96496b717fd4e5f1fc2004e" exitCode=137 Dec 01 15:06:52 crc kubenswrapper[4637]: I1201 15:06:52.571595 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c3a6afdb-ff5e-4280-b516-fdb897da268f","Type":"ContainerDied","Data":"cb655c74ff501d29e7160a5e02cbd9f14abccb75a96496b717fd4e5f1fc2004e"} Dec 01 15:06:52 crc kubenswrapper[4637]: I1201 15:06:52.571618 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c3a6afdb-ff5e-4280-b516-fdb897da268f","Type":"ContainerDied","Data":"664bdbd8de5db415278a1f57604218357deade67bf4a4d40963fca27eb0e225b"} Dec 01 15:06:52 crc kubenswrapper[4637]: I1201 15:06:52.571630 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="664bdbd8de5db415278a1f57604218357deade67bf4a4d40963fca27eb0e225b" Dec 01 15:06:52 crc kubenswrapper[4637]: I1201 15:06:52.595893 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.7584841559999997 podStartE2EDuration="7.595875677s" podCreationTimestamp="2025-12-01 15:06:45 +0000 UTC" firstStartedPulling="2025-12-01 
15:06:46.524359808 +0000 UTC m=+1257.042068626" lastFinishedPulling="2025-12-01 15:06:51.361751319 +0000 UTC m=+1261.879460147" observedRunningTime="2025-12-01 15:06:52.592345852 +0000 UTC m=+1263.110054680" watchObservedRunningTime="2025-12-01 15:06:52.595875677 +0000 UTC m=+1263.113584495" Dec 01 15:06:52 crc kubenswrapper[4637]: I1201 15:06:52.617958 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 15:06:52 crc kubenswrapper[4637]: I1201 15:06:52.770924 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kswhd\" (UniqueName: \"kubernetes.io/projected/c3a6afdb-ff5e-4280-b516-fdb897da268f-kube-api-access-kswhd\") pod \"c3a6afdb-ff5e-4280-b516-fdb897da268f\" (UID: \"c3a6afdb-ff5e-4280-b516-fdb897da268f\") " Dec 01 15:06:52 crc kubenswrapper[4637]: I1201 15:06:52.771534 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3a6afdb-ff5e-4280-b516-fdb897da268f-logs\") pod \"c3a6afdb-ff5e-4280-b516-fdb897da268f\" (UID: \"c3a6afdb-ff5e-4280-b516-fdb897da268f\") " Dec 01 15:06:52 crc kubenswrapper[4637]: I1201 15:06:52.771604 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a6afdb-ff5e-4280-b516-fdb897da268f-combined-ca-bundle\") pod \"c3a6afdb-ff5e-4280-b516-fdb897da268f\" (UID: \"c3a6afdb-ff5e-4280-b516-fdb897da268f\") " Dec 01 15:06:52 crc kubenswrapper[4637]: I1201 15:06:52.771681 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a6afdb-ff5e-4280-b516-fdb897da268f-config-data\") pod \"c3a6afdb-ff5e-4280-b516-fdb897da268f\" (UID: \"c3a6afdb-ff5e-4280-b516-fdb897da268f\") " Dec 01 15:06:52 crc kubenswrapper[4637]: I1201 15:06:52.771897 4637 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/c3a6afdb-ff5e-4280-b516-fdb897da268f-logs" (OuterVolumeSpecName: "logs") pod "c3a6afdb-ff5e-4280-b516-fdb897da268f" (UID: "c3a6afdb-ff5e-4280-b516-fdb897da268f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:06:52 crc kubenswrapper[4637]: I1201 15:06:52.772173 4637 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3a6afdb-ff5e-4280-b516-fdb897da268f-logs\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:52 crc kubenswrapper[4637]: I1201 15:06:52.805051 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3a6afdb-ff5e-4280-b516-fdb897da268f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3a6afdb-ff5e-4280-b516-fdb897da268f" (UID: "c3a6afdb-ff5e-4280-b516-fdb897da268f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:06:52 crc kubenswrapper[4637]: I1201 15:06:52.806213 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3a6afdb-ff5e-4280-b516-fdb897da268f-kube-api-access-kswhd" (OuterVolumeSpecName: "kube-api-access-kswhd") pod "c3a6afdb-ff5e-4280-b516-fdb897da268f" (UID: "c3a6afdb-ff5e-4280-b516-fdb897da268f"). InnerVolumeSpecName "kube-api-access-kswhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:06:52 crc kubenswrapper[4637]: I1201 15:06:52.806230 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3a6afdb-ff5e-4280-b516-fdb897da268f-config-data" (OuterVolumeSpecName: "config-data") pod "c3a6afdb-ff5e-4280-b516-fdb897da268f" (UID: "c3a6afdb-ff5e-4280-b516-fdb897da268f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:06:52 crc kubenswrapper[4637]: I1201 15:06:52.874042 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kswhd\" (UniqueName: \"kubernetes.io/projected/c3a6afdb-ff5e-4280-b516-fdb897da268f-kube-api-access-kswhd\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:52 crc kubenswrapper[4637]: I1201 15:06:52.874074 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a6afdb-ff5e-4280-b516-fdb897da268f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:52 crc kubenswrapper[4637]: I1201 15:06:52.874085 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a6afdb-ff5e-4280-b516-fdb897da268f-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:53 crc kubenswrapper[4637]: I1201 15:06:53.580439 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 15:06:53 crc kubenswrapper[4637]: I1201 15:06:53.621992 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 15:06:53 crc kubenswrapper[4637]: I1201 15:06:53.634124 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 15:06:53 crc kubenswrapper[4637]: I1201 15:06:53.648189 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 01 15:06:53 crc kubenswrapper[4637]: E1201 15:06:53.648734 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a6afdb-ff5e-4280-b516-fdb897da268f" containerName="nova-metadata-log" Dec 01 15:06:53 crc kubenswrapper[4637]: I1201 15:06:53.648761 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a6afdb-ff5e-4280-b516-fdb897da268f" containerName="nova-metadata-log" Dec 01 15:06:53 crc kubenswrapper[4637]: E1201 15:06:53.648783 4637 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c3a6afdb-ff5e-4280-b516-fdb897da268f" containerName="nova-metadata-metadata" Dec 01 15:06:53 crc kubenswrapper[4637]: I1201 15:06:53.648794 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a6afdb-ff5e-4280-b516-fdb897da268f" containerName="nova-metadata-metadata" Dec 01 15:06:53 crc kubenswrapper[4637]: I1201 15:06:53.649045 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3a6afdb-ff5e-4280-b516-fdb897da268f" containerName="nova-metadata-log" Dec 01 15:06:53 crc kubenswrapper[4637]: I1201 15:06:53.649066 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3a6afdb-ff5e-4280-b516-fdb897da268f" containerName="nova-metadata-metadata" Dec 01 15:06:53 crc kubenswrapper[4637]: I1201 15:06:53.650365 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 15:06:53 crc kubenswrapper[4637]: I1201 15:06:53.654427 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 01 15:06:53 crc kubenswrapper[4637]: I1201 15:06:53.654648 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 01 15:06:53 crc kubenswrapper[4637]: I1201 15:06:53.697748 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 15:06:53 crc kubenswrapper[4637]: I1201 15:06:53.801978 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/548020cf-ecfe-4200-b295-af12b510bd28-config-data\") pod \"nova-metadata-0\" (UID: \"548020cf-ecfe-4200-b295-af12b510bd28\") " pod="openstack/nova-metadata-0" Dec 01 15:06:53 crc kubenswrapper[4637]: I1201 15:06:53.802021 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpmwt\" (UniqueName: 
\"kubernetes.io/projected/548020cf-ecfe-4200-b295-af12b510bd28-kube-api-access-bpmwt\") pod \"nova-metadata-0\" (UID: \"548020cf-ecfe-4200-b295-af12b510bd28\") " pod="openstack/nova-metadata-0" Dec 01 15:06:53 crc kubenswrapper[4637]: I1201 15:06:53.802100 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/548020cf-ecfe-4200-b295-af12b510bd28-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"548020cf-ecfe-4200-b295-af12b510bd28\") " pod="openstack/nova-metadata-0" Dec 01 15:06:53 crc kubenswrapper[4637]: I1201 15:06:53.802137 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/548020cf-ecfe-4200-b295-af12b510bd28-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"548020cf-ecfe-4200-b295-af12b510bd28\") " pod="openstack/nova-metadata-0" Dec 01 15:06:53 crc kubenswrapper[4637]: I1201 15:06:53.802156 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/548020cf-ecfe-4200-b295-af12b510bd28-logs\") pod \"nova-metadata-0\" (UID: \"548020cf-ecfe-4200-b295-af12b510bd28\") " pod="openstack/nova-metadata-0" Dec 01 15:06:53 crc kubenswrapper[4637]: I1201 15:06:53.805769 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3a6afdb-ff5e-4280-b516-fdb897da268f" path="/var/lib/kubelet/pods/c3a6afdb-ff5e-4280-b516-fdb897da268f/volumes" Dec 01 15:06:53 crc kubenswrapper[4637]: E1201 15:06:53.809325 4637 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3a6afdb_ff5e_4280_b516_fdb897da268f.slice\": RecentStats: unable to find data in memory cache]" Dec 01 15:06:53 crc kubenswrapper[4637]: I1201 15:06:53.903392 4637 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/548020cf-ecfe-4200-b295-af12b510bd28-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"548020cf-ecfe-4200-b295-af12b510bd28\") " pod="openstack/nova-metadata-0" Dec 01 15:06:53 crc kubenswrapper[4637]: I1201 15:06:53.903483 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/548020cf-ecfe-4200-b295-af12b510bd28-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"548020cf-ecfe-4200-b295-af12b510bd28\") " pod="openstack/nova-metadata-0" Dec 01 15:06:53 crc kubenswrapper[4637]: I1201 15:06:53.903513 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/548020cf-ecfe-4200-b295-af12b510bd28-logs\") pod \"nova-metadata-0\" (UID: \"548020cf-ecfe-4200-b295-af12b510bd28\") " pod="openstack/nova-metadata-0" Dec 01 15:06:53 crc kubenswrapper[4637]: I1201 15:06:53.903633 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/548020cf-ecfe-4200-b295-af12b510bd28-config-data\") pod \"nova-metadata-0\" (UID: \"548020cf-ecfe-4200-b295-af12b510bd28\") " pod="openstack/nova-metadata-0" Dec 01 15:06:53 crc kubenswrapper[4637]: I1201 15:06:53.903657 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpmwt\" (UniqueName: \"kubernetes.io/projected/548020cf-ecfe-4200-b295-af12b510bd28-kube-api-access-bpmwt\") pod \"nova-metadata-0\" (UID: \"548020cf-ecfe-4200-b295-af12b510bd28\") " pod="openstack/nova-metadata-0" Dec 01 15:06:53 crc kubenswrapper[4637]: I1201 15:06:53.903978 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/548020cf-ecfe-4200-b295-af12b510bd28-logs\") pod 
\"nova-metadata-0\" (UID: \"548020cf-ecfe-4200-b295-af12b510bd28\") " pod="openstack/nova-metadata-0" Dec 01 15:06:53 crc kubenswrapper[4637]: I1201 15:06:53.914314 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/548020cf-ecfe-4200-b295-af12b510bd28-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"548020cf-ecfe-4200-b295-af12b510bd28\") " pod="openstack/nova-metadata-0" Dec 01 15:06:53 crc kubenswrapper[4637]: I1201 15:06:53.914364 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/548020cf-ecfe-4200-b295-af12b510bd28-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"548020cf-ecfe-4200-b295-af12b510bd28\") " pod="openstack/nova-metadata-0" Dec 01 15:06:53 crc kubenswrapper[4637]: I1201 15:06:53.914866 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/548020cf-ecfe-4200-b295-af12b510bd28-config-data\") pod \"nova-metadata-0\" (UID: \"548020cf-ecfe-4200-b295-af12b510bd28\") " pod="openstack/nova-metadata-0" Dec 01 15:06:53 crc kubenswrapper[4637]: I1201 15:06:53.925797 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpmwt\" (UniqueName: \"kubernetes.io/projected/548020cf-ecfe-4200-b295-af12b510bd28-kube-api-access-bpmwt\") pod \"nova-metadata-0\" (UID: \"548020cf-ecfe-4200-b295-af12b510bd28\") " pod="openstack/nova-metadata-0" Dec 01 15:06:54 crc kubenswrapper[4637]: I1201 15:06:54.095360 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 15:06:54 crc kubenswrapper[4637]: I1201 15:06:54.648146 4637 generic.go:334] "Generic (PLEG): container finished" podID="d1775b92-4bcb-4709-a299-5b116ed03f37" containerID="775bf6508043d7a7e09c9c877927ad79135730eb5ed2584635ccba435544bf6e" exitCode=137 Dec 01 15:06:54 crc kubenswrapper[4637]: I1201 15:06:54.648550 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d1775b92-4bcb-4709-a299-5b116ed03f37","Type":"ContainerDied","Data":"775bf6508043d7a7e09c9c877927ad79135730eb5ed2584635ccba435544bf6e"} Dec 01 15:06:54 crc kubenswrapper[4637]: I1201 15:06:54.764538 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 01 15:06:54 crc kubenswrapper[4637]: I1201 15:06:54.766516 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 01 15:06:54 crc kubenswrapper[4637]: I1201 15:06:54.784323 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 01 15:06:54 crc kubenswrapper[4637]: I1201 15:06:54.801978 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 01 15:06:54 crc kubenswrapper[4637]: I1201 15:06:54.939367 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.080578 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.190260 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1775b92-4bcb-4709-a299-5b116ed03f37-combined-ca-bundle\") pod \"d1775b92-4bcb-4709-a299-5b116ed03f37\" (UID: \"d1775b92-4bcb-4709-a299-5b116ed03f37\") " Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.190479 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpvcg\" (UniqueName: \"kubernetes.io/projected/d1775b92-4bcb-4709-a299-5b116ed03f37-kube-api-access-bpvcg\") pod \"d1775b92-4bcb-4709-a299-5b116ed03f37\" (UID: \"d1775b92-4bcb-4709-a299-5b116ed03f37\") " Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.190544 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1775b92-4bcb-4709-a299-5b116ed03f37-config-data\") pod \"d1775b92-4bcb-4709-a299-5b116ed03f37\" (UID: \"d1775b92-4bcb-4709-a299-5b116ed03f37\") " Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.195231 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1775b92-4bcb-4709-a299-5b116ed03f37-kube-api-access-bpvcg" (OuterVolumeSpecName: "kube-api-access-bpvcg") pod "d1775b92-4bcb-4709-a299-5b116ed03f37" (UID: "d1775b92-4bcb-4709-a299-5b116ed03f37"). InnerVolumeSpecName "kube-api-access-bpvcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.221029 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1775b92-4bcb-4709-a299-5b116ed03f37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1775b92-4bcb-4709-a299-5b116ed03f37" (UID: "d1775b92-4bcb-4709-a299-5b116ed03f37"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.224645 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1775b92-4bcb-4709-a299-5b116ed03f37-config-data" (OuterVolumeSpecName: "config-data") pod "d1775b92-4bcb-4709-a299-5b116ed03f37" (UID: "d1775b92-4bcb-4709-a299-5b116ed03f37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.300030 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1775b92-4bcb-4709-a299-5b116ed03f37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.300468 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpvcg\" (UniqueName: \"kubernetes.io/projected/d1775b92-4bcb-4709-a299-5b116ed03f37-kube-api-access-bpvcg\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.300485 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1775b92-4bcb-4709-a299-5b116ed03f37-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.662312 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"548020cf-ecfe-4200-b295-af12b510bd28","Type":"ContainerStarted","Data":"76cc11e37de29f42305478bcf2956f22822af5af2f5365d3144547b37933733f"} Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.662361 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"548020cf-ecfe-4200-b295-af12b510bd28","Type":"ContainerStarted","Data":"02d02f0c8778c4b352e5b5d88dee1b0d0ff2c5bbe6c79ce312c634c352cea4a0"} Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.662373 4637 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"548020cf-ecfe-4200-b295-af12b510bd28","Type":"ContainerStarted","Data":"0364885f372aaadc8668c10405040ff454ca3322520ba6e3e0951ebc0d69087e"} Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.666292 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.668139 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d1775b92-4bcb-4709-a299-5b116ed03f37","Type":"ContainerDied","Data":"621a7f4dc99257674455fc13b3b74a972a44c9a75584498536feee6996ee1323"} Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.668182 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.668203 4637 scope.go:117] "RemoveContainer" containerID="775bf6508043d7a7e09c9c877927ad79135730eb5ed2584635ccba435544bf6e" Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.675320 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.700585 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.700559208 podStartE2EDuration="2.700559208s" podCreationTimestamp="2025-12-01 15:06:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:06:55.686919608 +0000 UTC m=+1266.204628446" watchObservedRunningTime="2025-12-01 15:06:55.700559208 +0000 UTC m=+1266.218268126" Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.745100 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.766206 4637 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.783210 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1775b92-4bcb-4709-a299-5b116ed03f37" path="/var/lib/kubelet/pods/d1775b92-4bcb-4709-a299-5b116ed03f37/volumes" Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.783785 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 15:06:55 crc kubenswrapper[4637]: E1201 15:06:55.784121 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1775b92-4bcb-4709-a299-5b116ed03f37" containerName="nova-cell1-novncproxy-novncproxy" Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.784137 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1775b92-4bcb-4709-a299-5b116ed03f37" containerName="nova-cell1-novncproxy-novncproxy" Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.784327 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1775b92-4bcb-4709-a299-5b116ed03f37" containerName="nova-cell1-novncproxy-novncproxy" Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.784986 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.789796 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.790061 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.790363 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.816395 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.918939 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd4d95fc-d5be-40c9-bfae-3e1afaa2722d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd4d95fc-d5be-40c9-bfae-3e1afaa2722d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.918989 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd4d95fc-d5be-40c9-bfae-3e1afaa2722d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd4d95fc-d5be-40c9-bfae-3e1afaa2722d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.919039 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd4d95fc-d5be-40c9-bfae-3e1afaa2722d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd4d95fc-d5be-40c9-bfae-3e1afaa2722d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:55 crc 
kubenswrapper[4637]: I1201 15:06:55.919095 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd4d95fc-d5be-40c9-bfae-3e1afaa2722d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd4d95fc-d5be-40c9-bfae-3e1afaa2722d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:55 crc kubenswrapper[4637]: I1201 15:06:55.919112 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x48dh\" (UniqueName: \"kubernetes.io/projected/cd4d95fc-d5be-40c9-bfae-3e1afaa2722d-kube-api-access-x48dh\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd4d95fc-d5be-40c9-bfae-3e1afaa2722d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.020532 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd4d95fc-d5be-40c9-bfae-3e1afaa2722d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd4d95fc-d5be-40c9-bfae-3e1afaa2722d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.020581 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x48dh\" (UniqueName: \"kubernetes.io/projected/cd4d95fc-d5be-40c9-bfae-3e1afaa2722d-kube-api-access-x48dh\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd4d95fc-d5be-40c9-bfae-3e1afaa2722d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.020677 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd4d95fc-d5be-40c9-bfae-3e1afaa2722d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd4d95fc-d5be-40c9-bfae-3e1afaa2722d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 
15:06:56.020710 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd4d95fc-d5be-40c9-bfae-3e1afaa2722d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd4d95fc-d5be-40c9-bfae-3e1afaa2722d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.020755 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd4d95fc-d5be-40c9-bfae-3e1afaa2722d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd4d95fc-d5be-40c9-bfae-3e1afaa2722d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.039557 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd4d95fc-d5be-40c9-bfae-3e1afaa2722d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd4d95fc-d5be-40c9-bfae-3e1afaa2722d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.044244 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd4d95fc-d5be-40c9-bfae-3e1afaa2722d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd4d95fc-d5be-40c9-bfae-3e1afaa2722d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.046509 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd4d95fc-d5be-40c9-bfae-3e1afaa2722d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd4d95fc-d5be-40c9-bfae-3e1afaa2722d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.059405 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd4d95fc-d5be-40c9-bfae-3e1afaa2722d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd4d95fc-d5be-40c9-bfae-3e1afaa2722d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.061171 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-tn4rb"] Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.063010 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x48dh\" (UniqueName: \"kubernetes.io/projected/cd4d95fc-d5be-40c9-bfae-3e1afaa2722d-kube-api-access-x48dh\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd4d95fc-d5be-40c9-bfae-3e1afaa2722d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.064756 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.113997 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-tn4rb"] Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.123097 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-config\") pod \"dnsmasq-dns-5c7b6c5df9-tn4rb\" (UID: \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.123168 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-tn4rb\" (UID: \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" Dec 01 15:06:56 crc kubenswrapper[4637]: 
I1201 15:06:56.123231 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gmk6\" (UniqueName: \"kubernetes.io/projected/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-kube-api-access-9gmk6\") pod \"dnsmasq-dns-5c7b6c5df9-tn4rb\" (UID: \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.123251 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-tn4rb\" (UID: \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.123273 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-tn4rb\" (UID: \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.123301 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-tn4rb\" (UID: \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.128997 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.229144 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-tn4rb\" (UID: \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.229235 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gmk6\" (UniqueName: \"kubernetes.io/projected/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-kube-api-access-9gmk6\") pod \"dnsmasq-dns-5c7b6c5df9-tn4rb\" (UID: \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.229255 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-tn4rb\" (UID: \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.229276 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-tn4rb\" (UID: \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.230532 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-tn4rb\" (UID: \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\") " 
pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.231122 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-tn4rb\" (UID: \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.232294 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-tn4rb\" (UID: \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.232401 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-tn4rb\" (UID: \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.232572 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-config\") pod \"dnsmasq-dns-5c7b6c5df9-tn4rb\" (UID: \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.233370 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-config\") pod \"dnsmasq-dns-5c7b6c5df9-tn4rb\" (UID: \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.233978 4637 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-tn4rb\" (UID: \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.263483 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gmk6\" (UniqueName: \"kubernetes.io/projected/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-kube-api-access-9gmk6\") pod \"dnsmasq-dns-5c7b6c5df9-tn4rb\" (UID: \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.460731 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" Dec 01 15:06:56 crc kubenswrapper[4637]: I1201 15:06:56.875698 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 15:06:57 crc kubenswrapper[4637]: I1201 15:06:57.172832 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-tn4rb"] Dec 01 15:06:57 crc kubenswrapper[4637]: I1201 15:06:57.710009 4637 generic.go:334] "Generic (PLEG): container finished" podID="be11f54b-290b-4bd3-9f5f-8a2f78d49bcc" containerID="6a424b3142b314bdfa051ab113a6b8c79ed22f5dc77b520e7ee1cb690729eb9a" exitCode=0 Dec 01 15:06:57 crc kubenswrapper[4637]: I1201 15:06:57.710071 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" event={"ID":"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc","Type":"ContainerDied","Data":"6a424b3142b314bdfa051ab113a6b8c79ed22f5dc77b520e7ee1cb690729eb9a"} Dec 01 15:06:57 crc kubenswrapper[4637]: I1201 15:06:57.710402 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" 
event={"ID":"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc","Type":"ContainerStarted","Data":"71609ba6ca05fc10e3c50e5bc0d163c3ea332538278cb6330173973e3b0dc6f4"} Dec 01 15:06:57 crc kubenswrapper[4637]: I1201 15:06:57.711998 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cd4d95fc-d5be-40c9-bfae-3e1afaa2722d","Type":"ContainerStarted","Data":"2b52f9f53198dcb7663f262396ce393c17ef34f17a784c5a78e24b0d6fe103c3"} Dec 01 15:06:57 crc kubenswrapper[4637]: I1201 15:06:57.712041 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cd4d95fc-d5be-40c9-bfae-3e1afaa2722d","Type":"ContainerStarted","Data":"58fea4aae9215c3364a584fa09648ee2d16db6fdf03fe5ab07bfa39aa5bf850b"} Dec 01 15:06:57 crc kubenswrapper[4637]: I1201 15:06:57.798696 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.798673247 podStartE2EDuration="2.798673247s" podCreationTimestamp="2025-12-01 15:06:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:06:57.781068411 +0000 UTC m=+1268.298777239" watchObservedRunningTime="2025-12-01 15:06:57.798673247 +0000 UTC m=+1268.316382075" Dec 01 15:06:58 crc kubenswrapper[4637]: I1201 15:06:58.744422 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" event={"ID":"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc","Type":"ContainerStarted","Data":"f4f47fd398b510758b89506b0e971eae6fc87acf27cb3f5d6634c984c891a1cd"} Dec 01 15:06:58 crc kubenswrapper[4637]: I1201 15:06:58.747331 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" Dec 01 15:06:58 crc kubenswrapper[4637]: I1201 15:06:58.784612 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" podStartSLOduration=2.784596288 podStartE2EDuration="2.784596288s" podCreationTimestamp="2025-12-01 15:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:06:58.772961364 +0000 UTC m=+1269.290670192" watchObservedRunningTime="2025-12-01 15:06:58.784596288 +0000 UTC m=+1269.302305116" Dec 01 15:06:59 crc kubenswrapper[4637]: I1201 15:06:59.096352 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 01 15:06:59 crc kubenswrapper[4637]: I1201 15:06:59.096423 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 01 15:06:59 crc kubenswrapper[4637]: I1201 15:06:59.569115 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 01 15:06:59 crc kubenswrapper[4637]: I1201 15:06:59.569353 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="65cac97b-b9c2-4125-8d93-2d4034cf68a5" containerName="nova-api-log" containerID="cri-o://b0e4c925d06c8880cfbbfcc323339edab1efcf3f6447b32187bee2a2ed963e01" gracePeriod=30 Dec 01 15:06:59 crc kubenswrapper[4637]: I1201 15:06:59.569495 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="65cac97b-b9c2-4125-8d93-2d4034cf68a5" containerName="nova-api-api" containerID="cri-o://28aea11165bcbc933d3ff0801bf06247c73f06a405b2f4a18be5a22eb2a152d2" gracePeriod=30 Dec 01 15:06:59 crc kubenswrapper[4637]: I1201 15:06:59.755639 4637 generic.go:334] "Generic (PLEG): container finished" podID="65cac97b-b9c2-4125-8d93-2d4034cf68a5" containerID="b0e4c925d06c8880cfbbfcc323339edab1efcf3f6447b32187bee2a2ed963e01" exitCode=143 Dec 01 15:06:59 crc kubenswrapper[4637]: I1201 15:06:59.756348 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"65cac97b-b9c2-4125-8d93-2d4034cf68a5","Type":"ContainerDied","Data":"b0e4c925d06c8880cfbbfcc323339edab1efcf3f6447b32187bee2a2ed963e01"} Dec 01 15:07:00 crc kubenswrapper[4637]: I1201 15:07:00.751332 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:07:00 crc kubenswrapper[4637]: I1201 15:07:00.755504 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" containerName="ceilometer-central-agent" containerID="cri-o://523f77f9ee122c96c18a35fdfe43503bc55625e979c254d0ec00287053718772" gracePeriod=30 Dec 01 15:07:00 crc kubenswrapper[4637]: I1201 15:07:00.755654 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" containerName="proxy-httpd" containerID="cri-o://ebe1bfd06e7cbf3b03c08ce998db21e5c33d265d08c45bc892047c7327299a2a" gracePeriod=30 Dec 01 15:07:00 crc kubenswrapper[4637]: I1201 15:07:00.755749 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" containerName="ceilometer-notification-agent" containerID="cri-o://68366e2a8ba39ac638252168c46747c9f9fecbe670b22d2e517867e18b79adf2" gracePeriod=30 Dec 01 15:07:00 crc kubenswrapper[4637]: I1201 15:07:00.756183 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" containerName="sg-core" containerID="cri-o://94e6414ef104ca49aa2d45a460e20fac089da5affdbf0651e5758ae5b8758aec" gracePeriod=30 Dec 01 15:07:01 crc kubenswrapper[4637]: I1201 15:07:01.130096 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:07:01 crc kubenswrapper[4637]: I1201 15:07:01.822363 4637 generic.go:334] "Generic (PLEG): container finished" 
podID="45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" containerID="ebe1bfd06e7cbf3b03c08ce998db21e5c33d265d08c45bc892047c7327299a2a" exitCode=0 Dec 01 15:07:01 crc kubenswrapper[4637]: I1201 15:07:01.822696 4637 generic.go:334] "Generic (PLEG): container finished" podID="45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" containerID="94e6414ef104ca49aa2d45a460e20fac089da5affdbf0651e5758ae5b8758aec" exitCode=2 Dec 01 15:07:01 crc kubenswrapper[4637]: I1201 15:07:01.822706 4637 generic.go:334] "Generic (PLEG): container finished" podID="45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" containerID="68366e2a8ba39ac638252168c46747c9f9fecbe670b22d2e517867e18b79adf2" exitCode=0 Dec 01 15:07:01 crc kubenswrapper[4637]: I1201 15:07:01.822715 4637 generic.go:334] "Generic (PLEG): container finished" podID="45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" containerID="523f77f9ee122c96c18a35fdfe43503bc55625e979c254d0ec00287053718772" exitCode=0 Dec 01 15:07:01 crc kubenswrapper[4637]: I1201 15:07:01.822496 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45","Type":"ContainerDied","Data":"ebe1bfd06e7cbf3b03c08ce998db21e5c33d265d08c45bc892047c7327299a2a"} Dec 01 15:07:01 crc kubenswrapper[4637]: I1201 15:07:01.822798 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45","Type":"ContainerDied","Data":"94e6414ef104ca49aa2d45a460e20fac089da5affdbf0651e5758ae5b8758aec"} Dec 01 15:07:01 crc kubenswrapper[4637]: I1201 15:07:01.822819 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45","Type":"ContainerDied","Data":"68366e2a8ba39ac638252168c46747c9f9fecbe670b22d2e517867e18b79adf2"} Dec 01 15:07:01 crc kubenswrapper[4637]: I1201 15:07:01.822832 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45","Type":"ContainerDied","Data":"523f77f9ee122c96c18a35fdfe43503bc55625e979c254d0ec00287053718772"} Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.126239 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.190366 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-log-httpd\") pod \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.191972 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" (UID: "45d44a33-a1ba-40c7-a0d6-ebb0fb531b45"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.193056 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-config-data\") pod \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.193135 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w4d8\" (UniqueName: \"kubernetes.io/projected/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-kube-api-access-9w4d8\") pod \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.193276 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-scripts\") pod \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.193321 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-combined-ca-bundle\") pod \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.193410 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-sg-core-conf-yaml\") pod \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.193454 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-run-httpd\") pod \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.193506 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-ceilometer-tls-certs\") pod \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\" (UID: \"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45\") " Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.199273 4637 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.209536 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" (UID: "45d44a33-a1ba-40c7-a0d6-ebb0fb531b45"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.214044 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-kube-api-access-9w4d8" (OuterVolumeSpecName: "kube-api-access-9w4d8") pod "45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" (UID: "45d44a33-a1ba-40c7-a0d6-ebb0fb531b45"). InnerVolumeSpecName "kube-api-access-9w4d8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.219249 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-scripts" (OuterVolumeSpecName: "scripts") pod "45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" (UID: "45d44a33-a1ba-40c7-a0d6-ebb0fb531b45"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.284061 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" (UID: "45d44a33-a1ba-40c7-a0d6-ebb0fb531b45"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.303415 4637 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.303725 4637 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.303796 4637 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.303866 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w4d8\" (UniqueName: \"kubernetes.io/projected/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-kube-api-access-9w4d8\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:02 crc 
kubenswrapper[4637]: I1201 15:07:02.335620 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" (UID: "45d44a33-a1ba-40c7-a0d6-ebb0fb531b45"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.363132 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" (UID: "45d44a33-a1ba-40c7-a0d6-ebb0fb531b45"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.395337 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-config-data" (OuterVolumeSpecName: "config-data") pod "45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" (UID: "45d44a33-a1ba-40c7-a0d6-ebb0fb531b45"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.405619 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.405647 4637 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.405659 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.835378 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45d44a33-a1ba-40c7-a0d6-ebb0fb531b45","Type":"ContainerDied","Data":"ccb0d324e3c4f023a453e25b062f5f173a6cac49e504a6c6efcccbe3115d9f98"} Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.835426 4637 scope.go:117] "RemoveContainer" containerID="ebe1bfd06e7cbf3b03c08ce998db21e5c33d265d08c45bc892047c7327299a2a" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.835569 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.846582 4637 generic.go:334] "Generic (PLEG): container finished" podID="65cac97b-b9c2-4125-8d93-2d4034cf68a5" containerID="28aea11165bcbc933d3ff0801bf06247c73f06a405b2f4a18be5a22eb2a152d2" exitCode=0 Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.846623 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"65cac97b-b9c2-4125-8d93-2d4034cf68a5","Type":"ContainerDied","Data":"28aea11165bcbc933d3ff0801bf06247c73f06a405b2f4a18be5a22eb2a152d2"} Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.872891 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.883533 4637 scope.go:117] "RemoveContainer" containerID="94e6414ef104ca49aa2d45a460e20fac089da5affdbf0651e5758ae5b8758aec" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.896136 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.904488 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:07:02 crc kubenswrapper[4637]: E1201 15:07:02.904853 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" containerName="ceilometer-notification-agent" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.904873 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" containerName="ceilometer-notification-agent" Dec 01 15:07:02 crc kubenswrapper[4637]: E1201 15:07:02.904886 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" containerName="sg-core" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.904892 4637 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" containerName="sg-core" Dec 01 15:07:02 crc kubenswrapper[4637]: E1201 15:07:02.904905 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" containerName="ceilometer-central-agent" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.904911 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" containerName="ceilometer-central-agent" Dec 01 15:07:02 crc kubenswrapper[4637]: E1201 15:07:02.904953 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" containerName="proxy-httpd" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.904959 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" containerName="proxy-httpd" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.905418 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" containerName="proxy-httpd" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.905451 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" containerName="ceilometer-notification-agent" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.905468 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" containerName="sg-core" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.905479 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" containerName="ceilometer-central-agent" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.908900 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.920654 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.922670 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.924039 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.933836 4637 scope.go:117] "RemoveContainer" containerID="68366e2a8ba39ac638252168c46747c9f9fecbe670b22d2e517867e18b79adf2" Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.943734 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:07:02 crc kubenswrapper[4637]: I1201 15:07:02.978371 4637 scope.go:117] "RemoveContainer" containerID="523f77f9ee122c96c18a35fdfe43503bc55625e979c254d0ec00287053718772" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.034496 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-scripts\") pod \"ceilometer-0\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " pod="openstack/ceilometer-0" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.034548 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " pod="openstack/ceilometer-0" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.034612 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/66d735be-b6b1-4cd3-bb09-a959dc293ef9-log-httpd\") pod \"ceilometer-0\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " pod="openstack/ceilometer-0" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.034688 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74dp7\" (UniqueName: \"kubernetes.io/projected/66d735be-b6b1-4cd3-bb09-a959dc293ef9-kube-api-access-74dp7\") pod \"ceilometer-0\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " pod="openstack/ceilometer-0" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.034725 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " pod="openstack/ceilometer-0" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.034739 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " pod="openstack/ceilometer-0" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.034773 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-config-data\") pod \"ceilometer-0\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " pod="openstack/ceilometer-0" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.034787 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66d735be-b6b1-4cd3-bb09-a959dc293ef9-run-httpd\") pod \"ceilometer-0\" (UID: 
\"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " pod="openstack/ceilometer-0" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.137080 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-scripts\") pod \"ceilometer-0\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " pod="openstack/ceilometer-0" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.137264 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " pod="openstack/ceilometer-0" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.137412 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66d735be-b6b1-4cd3-bb09-a959dc293ef9-log-httpd\") pod \"ceilometer-0\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " pod="openstack/ceilometer-0" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.137516 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74dp7\" (UniqueName: \"kubernetes.io/projected/66d735be-b6b1-4cd3-bb09-a959dc293ef9-kube-api-access-74dp7\") pod \"ceilometer-0\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " pod="openstack/ceilometer-0" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.137593 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " pod="openstack/ceilometer-0" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.137635 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " pod="openstack/ceilometer-0" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.137679 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-config-data\") pod \"ceilometer-0\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " pod="openstack/ceilometer-0" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.137719 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66d735be-b6b1-4cd3-bb09-a959dc293ef9-run-httpd\") pod \"ceilometer-0\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " pod="openstack/ceilometer-0" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.138198 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66d735be-b6b1-4cd3-bb09-a959dc293ef9-log-httpd\") pod \"ceilometer-0\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " pod="openstack/ceilometer-0" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.138490 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66d735be-b6b1-4cd3-bb09-a959dc293ef9-run-httpd\") pod \"ceilometer-0\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " pod="openstack/ceilometer-0" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.144120 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " pod="openstack/ceilometer-0" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 
15:07:03.147913 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " pod="openstack/ceilometer-0" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.153746 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " pod="openstack/ceilometer-0" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.154439 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-config-data\") pod \"ceilometer-0\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " pod="openstack/ceilometer-0" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.154698 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-scripts\") pod \"ceilometer-0\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " pod="openstack/ceilometer-0" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.155499 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74dp7\" (UniqueName: \"kubernetes.io/projected/66d735be-b6b1-4cd3-bb09-a959dc293ef9-kube-api-access-74dp7\") pod \"ceilometer-0\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " pod="openstack/ceilometer-0" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.243528 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.386211 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.455641 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65cac97b-b9c2-4125-8d93-2d4034cf68a5-config-data\") pod \"65cac97b-b9c2-4125-8d93-2d4034cf68a5\" (UID: \"65cac97b-b9c2-4125-8d93-2d4034cf68a5\") " Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.455850 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65cac97b-b9c2-4125-8d93-2d4034cf68a5-combined-ca-bundle\") pod \"65cac97b-b9c2-4125-8d93-2d4034cf68a5\" (UID: \"65cac97b-b9c2-4125-8d93-2d4034cf68a5\") " Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.455894 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcvnd\" (UniqueName: \"kubernetes.io/projected/65cac97b-b9c2-4125-8d93-2d4034cf68a5-kube-api-access-kcvnd\") pod \"65cac97b-b9c2-4125-8d93-2d4034cf68a5\" (UID: \"65cac97b-b9c2-4125-8d93-2d4034cf68a5\") " Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.455945 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65cac97b-b9c2-4125-8d93-2d4034cf68a5-logs\") pod \"65cac97b-b9c2-4125-8d93-2d4034cf68a5\" (UID: \"65cac97b-b9c2-4125-8d93-2d4034cf68a5\") " Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.456914 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65cac97b-b9c2-4125-8d93-2d4034cf68a5-logs" (OuterVolumeSpecName: "logs") pod "65cac97b-b9c2-4125-8d93-2d4034cf68a5" (UID: "65cac97b-b9c2-4125-8d93-2d4034cf68a5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.462116 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65cac97b-b9c2-4125-8d93-2d4034cf68a5-kube-api-access-kcvnd" (OuterVolumeSpecName: "kube-api-access-kcvnd") pod "65cac97b-b9c2-4125-8d93-2d4034cf68a5" (UID: "65cac97b-b9c2-4125-8d93-2d4034cf68a5"). InnerVolumeSpecName "kube-api-access-kcvnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.488683 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65cac97b-b9c2-4125-8d93-2d4034cf68a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65cac97b-b9c2-4125-8d93-2d4034cf68a5" (UID: "65cac97b-b9c2-4125-8d93-2d4034cf68a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.491905 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65cac97b-b9c2-4125-8d93-2d4034cf68a5-config-data" (OuterVolumeSpecName: "config-data") pod "65cac97b-b9c2-4125-8d93-2d4034cf68a5" (UID: "65cac97b-b9c2-4125-8d93-2d4034cf68a5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.558310 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65cac97b-b9c2-4125-8d93-2d4034cf68a5-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.558381 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65cac97b-b9c2-4125-8d93-2d4034cf68a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.558397 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcvnd\" (UniqueName: \"kubernetes.io/projected/65cac97b-b9c2-4125-8d93-2d4034cf68a5-kube-api-access-kcvnd\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.558411 4637 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65cac97b-b9c2-4125-8d93-2d4034cf68a5-logs\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.646815 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.736023 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.782833 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45d44a33-a1ba-40c7-a0d6-ebb0fb531b45" path="/var/lib/kubelet/pods/45d44a33-a1ba-40c7-a0d6-ebb0fb531b45/volumes" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.859448 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"65cac97b-b9c2-4125-8d93-2d4034cf68a5","Type":"ContainerDied","Data":"4e10708cc44cde1f11b2a05f4dda4d2c05c58e209f21f2bbb5a619c2a432300a"} Dec 01 15:07:03 crc kubenswrapper[4637]: 
I1201 15:07:03.859500 4637 scope.go:117] "RemoveContainer" containerID="28aea11165bcbc933d3ff0801bf06247c73f06a405b2f4a18be5a22eb2a152d2" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.859609 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.864524 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66d735be-b6b1-4cd3-bb09-a959dc293ef9","Type":"ContainerStarted","Data":"5976e9bd572595f2a46bbafbdcebe91801c4bd8ce86ccf66c7092f4795da8dbf"} Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.894077 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.897653 4637 scope.go:117] "RemoveContainer" containerID="b0e4c925d06c8880cfbbfcc323339edab1efcf3f6447b32187bee2a2ed963e01" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.921838 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.948497 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 01 15:07:03 crc kubenswrapper[4637]: E1201 15:07:03.948991 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65cac97b-b9c2-4125-8d93-2d4034cf68a5" containerName="nova-api-api" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.949083 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="65cac97b-b9c2-4125-8d93-2d4034cf68a5" containerName="nova-api-api" Dec 01 15:07:03 crc kubenswrapper[4637]: E1201 15:07:03.949182 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65cac97b-b9c2-4125-8d93-2d4034cf68a5" containerName="nova-api-log" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.949262 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="65cac97b-b9c2-4125-8d93-2d4034cf68a5" 
containerName="nova-api-log" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.949609 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="65cac97b-b9c2-4125-8d93-2d4034cf68a5" containerName="nova-api-log" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.949699 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="65cac97b-b9c2-4125-8d93-2d4034cf68a5" containerName="nova-api-api" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.950849 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.960399 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.960636 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.961394 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 01 15:07:03 crc kubenswrapper[4637]: I1201 15:07:03.969179 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 01 15:07:04 crc kubenswrapper[4637]: I1201 15:07:04.073399 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-logs\") pod \"nova-api-0\" (UID: \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\") " pod="openstack/nova-api-0" Dec 01 15:07:04 crc kubenswrapper[4637]: I1201 15:07:04.073501 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\") " pod="openstack/nova-api-0" Dec 01 15:07:04 crc 
kubenswrapper[4637]: I1201 15:07:04.073548 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-public-tls-certs\") pod \"nova-api-0\" (UID: \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\") " pod="openstack/nova-api-0" Dec 01 15:07:04 crc kubenswrapper[4637]: I1201 15:07:04.073568 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6pn2\" (UniqueName: \"kubernetes.io/projected/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-kube-api-access-s6pn2\") pod \"nova-api-0\" (UID: \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\") " pod="openstack/nova-api-0" Dec 01 15:07:04 crc kubenswrapper[4637]: I1201 15:07:04.073636 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\") " pod="openstack/nova-api-0" Dec 01 15:07:04 crc kubenswrapper[4637]: I1201 15:07:04.073657 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-config-data\") pod \"nova-api-0\" (UID: \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\") " pod="openstack/nova-api-0" Dec 01 15:07:04 crc kubenswrapper[4637]: I1201 15:07:04.097115 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 01 15:07:04 crc kubenswrapper[4637]: I1201 15:07:04.097164 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 01 15:07:04 crc kubenswrapper[4637]: I1201 15:07:04.176063 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\") " pod="openstack/nova-api-0" Dec 01 15:07:04 crc kubenswrapper[4637]: I1201 15:07:04.176112 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-config-data\") pod \"nova-api-0\" (UID: \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\") " pod="openstack/nova-api-0" Dec 01 15:07:04 crc kubenswrapper[4637]: I1201 15:07:04.176145 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-logs\") pod \"nova-api-0\" (UID: \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\") " pod="openstack/nova-api-0" Dec 01 15:07:04 crc kubenswrapper[4637]: I1201 15:07:04.176220 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\") " pod="openstack/nova-api-0" Dec 01 15:07:04 crc kubenswrapper[4637]: I1201 15:07:04.176265 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-public-tls-certs\") pod \"nova-api-0\" (UID: \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\") " pod="openstack/nova-api-0" Dec 01 15:07:04 crc kubenswrapper[4637]: I1201 15:07:04.176281 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6pn2\" (UniqueName: \"kubernetes.io/projected/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-kube-api-access-s6pn2\") pod \"nova-api-0\" (UID: \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\") " pod="openstack/nova-api-0" Dec 01 15:07:04 crc kubenswrapper[4637]: I1201 
15:07:04.176642 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-logs\") pod \"nova-api-0\" (UID: \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\") " pod="openstack/nova-api-0" Dec 01 15:07:04 crc kubenswrapper[4637]: I1201 15:07:04.181486 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\") " pod="openstack/nova-api-0" Dec 01 15:07:04 crc kubenswrapper[4637]: I1201 15:07:04.181889 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-public-tls-certs\") pod \"nova-api-0\" (UID: \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\") " pod="openstack/nova-api-0" Dec 01 15:07:04 crc kubenswrapper[4637]: I1201 15:07:04.184029 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-config-data\") pod \"nova-api-0\" (UID: \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\") " pod="openstack/nova-api-0" Dec 01 15:07:04 crc kubenswrapper[4637]: I1201 15:07:04.184579 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\") " pod="openstack/nova-api-0" Dec 01 15:07:04 crc kubenswrapper[4637]: I1201 15:07:04.201666 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6pn2\" (UniqueName: \"kubernetes.io/projected/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-kube-api-access-s6pn2\") pod \"nova-api-0\" (UID: \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\") " 
pod="openstack/nova-api-0" Dec 01 15:07:04 crc kubenswrapper[4637]: I1201 15:07:04.285623 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 01 15:07:04 crc kubenswrapper[4637]: I1201 15:07:04.873006 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 01 15:07:04 crc kubenswrapper[4637]: W1201 15:07:04.887096 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8883a1af_5436_4fd3_bd2b_6594a0e74f9d.slice/crio-5b8e0e5805694246649d55cca4992abfd740f5e38316afa0c30542ccbf1f63de WatchSource:0}: Error finding container 5b8e0e5805694246649d55cca4992abfd740f5e38316afa0c30542ccbf1f63de: Status 404 returned error can't find the container with id 5b8e0e5805694246649d55cca4992abfd740f5e38316afa0c30542ccbf1f63de Dec 01 15:07:04 crc kubenswrapper[4637]: I1201 15:07:04.953684 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66d735be-b6b1-4cd3-bb09-a959dc293ef9","Type":"ContainerStarted","Data":"ec8386abfe61b96b7225e177b83e20f8c6d19271cd65fbabf6529dd51676a223"} Dec 01 15:07:05 crc kubenswrapper[4637]: I1201 15:07:05.110132 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="548020cf-ecfe-4200-b295-af12b510bd28" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 15:07:05 crc kubenswrapper[4637]: I1201 15:07:05.110497 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="548020cf-ecfe-4200-b295-af12b510bd28" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 15:07:05 crc kubenswrapper[4637]: I1201 15:07:05.783685 4637 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65cac97b-b9c2-4125-8d93-2d4034cf68a5" path="/var/lib/kubelet/pods/65cac97b-b9c2-4125-8d93-2d4034cf68a5/volumes" Dec 01 15:07:05 crc kubenswrapper[4637]: I1201 15:07:05.963646 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66d735be-b6b1-4cd3-bb09-a959dc293ef9","Type":"ContainerStarted","Data":"5c3a227f6ca7bfea5c0169abda5f255618731e86d96b91fe038fc861ad63cf56"} Dec 01 15:07:05 crc kubenswrapper[4637]: I1201 15:07:05.964987 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8883a1af-5436-4fd3-bd2b-6594a0e74f9d","Type":"ContainerStarted","Data":"4f106d4801b2a6a15e117af79d94cab7a9897136f4abec9ab227a7919af13217"} Dec 01 15:07:05 crc kubenswrapper[4637]: I1201 15:07:05.965015 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8883a1af-5436-4fd3-bd2b-6594a0e74f9d","Type":"ContainerStarted","Data":"215ea9cbd565f71fb5de5f7f075ea7201086415d7900e24d4ef6960460a919a9"} Dec 01 15:07:05 crc kubenswrapper[4637]: I1201 15:07:05.965026 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8883a1af-5436-4fd3-bd2b-6594a0e74f9d","Type":"ContainerStarted","Data":"5b8e0e5805694246649d55cca4992abfd740f5e38316afa0c30542ccbf1f63de"} Dec 01 15:07:05 crc kubenswrapper[4637]: I1201 15:07:05.982866 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.982841249 podStartE2EDuration="2.982841249s" podCreationTimestamp="2025-12-01 15:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:07:05.981920575 +0000 UTC m=+1276.499629403" watchObservedRunningTime="2025-12-01 15:07:05.982841249 +0000 UTC m=+1276.500550077" Dec 01 15:07:06 crc kubenswrapper[4637]: I1201 15:07:06.130143 4637 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:07:06 crc kubenswrapper[4637]: I1201 15:07:06.155039 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:07:06 crc kubenswrapper[4637]: I1201 15:07:06.463171 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" Dec 01 15:07:06 crc kubenswrapper[4637]: I1201 15:07:06.550685 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-gbnmb"] Dec 01 15:07:06 crc kubenswrapper[4637]: I1201 15:07:06.550925 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" podUID="bca325c0-0fea-403c-8354-654b1c6167a3" containerName="dnsmasq-dns" containerID="cri-o://db919e85951f0b5c744e820b05d43e85696840ac52d8cbfbeea5685c602a4d3d" gracePeriod=10 Dec 01 15:07:06 crc kubenswrapper[4637]: I1201 15:07:06.995675 4637 generic.go:334] "Generic (PLEG): container finished" podID="bca325c0-0fea-403c-8354-654b1c6167a3" containerID="db919e85951f0b5c744e820b05d43e85696840ac52d8cbfbeea5685c602a4d3d" exitCode=0 Dec 01 15:07:06 crc kubenswrapper[4637]: I1201 15:07:06.996536 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" event={"ID":"bca325c0-0fea-403c-8354-654b1c6167a3","Type":"ContainerDied","Data":"db919e85951f0b5c744e820b05d43e85696840ac52d8cbfbeea5685c602a4d3d"} Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.054055 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.372447 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.407613 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-ccqz9"] Dec 01 15:07:07 crc kubenswrapper[4637]: E1201 15:07:07.408089 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bca325c0-0fea-403c-8354-654b1c6167a3" containerName="init" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.408108 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="bca325c0-0fea-403c-8354-654b1c6167a3" containerName="init" Dec 01 15:07:07 crc kubenswrapper[4637]: E1201 15:07:07.408139 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bca325c0-0fea-403c-8354-654b1c6167a3" containerName="dnsmasq-dns" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.408146 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="bca325c0-0fea-403c-8354-654b1c6167a3" containerName="dnsmasq-dns" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.408318 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="bca325c0-0fea-403c-8354-654b1c6167a3" containerName="dnsmasq-dns" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.408967 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ccqz9" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.411460 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.411604 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.439033 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-ccqz9"] Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.475697 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxr6r\" (UniqueName: \"kubernetes.io/projected/bca325c0-0fea-403c-8354-654b1c6167a3-kube-api-access-hxr6r\") pod \"bca325c0-0fea-403c-8354-654b1c6167a3\" (UID: \"bca325c0-0fea-403c-8354-654b1c6167a3\") " Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.475895 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-config\") pod \"bca325c0-0fea-403c-8354-654b1c6167a3\" (UID: \"bca325c0-0fea-403c-8354-654b1c6167a3\") " Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.475982 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-ovsdbserver-sb\") pod \"bca325c0-0fea-403c-8354-654b1c6167a3\" (UID: \"bca325c0-0fea-403c-8354-654b1c6167a3\") " Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.476057 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-dns-swift-storage-0\") pod \"bca325c0-0fea-403c-8354-654b1c6167a3\" (UID: 
\"bca325c0-0fea-403c-8354-654b1c6167a3\") " Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.476158 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-ovsdbserver-nb\") pod \"bca325c0-0fea-403c-8354-654b1c6167a3\" (UID: \"bca325c0-0fea-403c-8354-654b1c6167a3\") " Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.476195 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-dns-svc\") pod \"bca325c0-0fea-403c-8354-654b1c6167a3\" (UID: \"bca325c0-0fea-403c-8354-654b1c6167a3\") " Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.476467 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba29815f-9b56-490a-ad00-b358c7328ec9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ccqz9\" (UID: \"ba29815f-9b56-490a-ad00-b358c7328ec9\") " pod="openstack/nova-cell1-cell-mapping-ccqz9" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.476526 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj4j4\" (UniqueName: \"kubernetes.io/projected/ba29815f-9b56-490a-ad00-b358c7328ec9-kube-api-access-kj4j4\") pod \"nova-cell1-cell-mapping-ccqz9\" (UID: \"ba29815f-9b56-490a-ad00-b358c7328ec9\") " pod="openstack/nova-cell1-cell-mapping-ccqz9" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.476563 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba29815f-9b56-490a-ad00-b358c7328ec9-config-data\") pod \"nova-cell1-cell-mapping-ccqz9\" (UID: \"ba29815f-9b56-490a-ad00-b358c7328ec9\") " pod="openstack/nova-cell1-cell-mapping-ccqz9" Dec 01 15:07:07 crc 
kubenswrapper[4637]: I1201 15:07:07.476714 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba29815f-9b56-490a-ad00-b358c7328ec9-scripts\") pod \"nova-cell1-cell-mapping-ccqz9\" (UID: \"ba29815f-9b56-490a-ad00-b358c7328ec9\") " pod="openstack/nova-cell1-cell-mapping-ccqz9" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.494344 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bca325c0-0fea-403c-8354-654b1c6167a3-kube-api-access-hxr6r" (OuterVolumeSpecName: "kube-api-access-hxr6r") pod "bca325c0-0fea-403c-8354-654b1c6167a3" (UID: "bca325c0-0fea-403c-8354-654b1c6167a3"). InnerVolumeSpecName "kube-api-access-hxr6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.576033 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bca325c0-0fea-403c-8354-654b1c6167a3" (UID: "bca325c0-0fea-403c-8354-654b1c6167a3"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.582169 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba29815f-9b56-490a-ad00-b358c7328ec9-scripts\") pod \"nova-cell1-cell-mapping-ccqz9\" (UID: \"ba29815f-9b56-490a-ad00-b358c7328ec9\") " pod="openstack/nova-cell1-cell-mapping-ccqz9" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.582252 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba29815f-9b56-490a-ad00-b358c7328ec9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ccqz9\" (UID: \"ba29815f-9b56-490a-ad00-b358c7328ec9\") " pod="openstack/nova-cell1-cell-mapping-ccqz9" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.582287 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj4j4\" (UniqueName: \"kubernetes.io/projected/ba29815f-9b56-490a-ad00-b358c7328ec9-kube-api-access-kj4j4\") pod \"nova-cell1-cell-mapping-ccqz9\" (UID: \"ba29815f-9b56-490a-ad00-b358c7328ec9\") " pod="openstack/nova-cell1-cell-mapping-ccqz9" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.582306 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba29815f-9b56-490a-ad00-b358c7328ec9-config-data\") pod \"nova-cell1-cell-mapping-ccqz9\" (UID: \"ba29815f-9b56-490a-ad00-b358c7328ec9\") " pod="openstack/nova-cell1-cell-mapping-ccqz9" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.582400 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxr6r\" (UniqueName: \"kubernetes.io/projected/bca325c0-0fea-403c-8354-654b1c6167a3-kube-api-access-hxr6r\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.582416 4637 reconciler_common.go:293] "Volume 
detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.586808 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba29815f-9b56-490a-ad00-b358c7328ec9-config-data\") pod \"nova-cell1-cell-mapping-ccqz9\" (UID: \"ba29815f-9b56-490a-ad00-b358c7328ec9\") " pod="openstack/nova-cell1-cell-mapping-ccqz9" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.588810 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba29815f-9b56-490a-ad00-b358c7328ec9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ccqz9\" (UID: \"ba29815f-9b56-490a-ad00-b358c7328ec9\") " pod="openstack/nova-cell1-cell-mapping-ccqz9" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.594430 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba29815f-9b56-490a-ad00-b358c7328ec9-scripts\") pod \"nova-cell1-cell-mapping-ccqz9\" (UID: \"ba29815f-9b56-490a-ad00-b358c7328ec9\") " pod="openstack/nova-cell1-cell-mapping-ccqz9" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.634616 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj4j4\" (UniqueName: \"kubernetes.io/projected/ba29815f-9b56-490a-ad00-b358c7328ec9-kube-api-access-kj4j4\") pod \"nova-cell1-cell-mapping-ccqz9\" (UID: \"ba29815f-9b56-490a-ad00-b358c7328ec9\") " pod="openstack/nova-cell1-cell-mapping-ccqz9" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.635475 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-config" (OuterVolumeSpecName: "config") pod "bca325c0-0fea-403c-8354-654b1c6167a3" (UID: 
"bca325c0-0fea-403c-8354-654b1c6167a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.651715 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bca325c0-0fea-403c-8354-654b1c6167a3" (UID: "bca325c0-0fea-403c-8354-654b1c6167a3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.652429 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bca325c0-0fea-403c-8354-654b1c6167a3" (UID: "bca325c0-0fea-403c-8354-654b1c6167a3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.654537 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bca325c0-0fea-403c-8354-654b1c6167a3" (UID: "bca325c0-0fea-403c-8354-654b1c6167a3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.684678 4637 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.684721 4637 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.684734 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-config\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.684746 4637 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bca325c0-0fea-403c-8354-654b1c6167a3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:07 crc kubenswrapper[4637]: I1201 15:07:07.728063 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ccqz9" Dec 01 15:07:08 crc kubenswrapper[4637]: I1201 15:07:08.009400 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" event={"ID":"bca325c0-0fea-403c-8354-654b1c6167a3","Type":"ContainerDied","Data":"ed406c367aa48ef1479036f4d2b4abdbf152f9130a3da6ad1bcf3999e91b1826"} Dec 01 15:07:08 crc kubenswrapper[4637]: I1201 15:07:08.009767 4637 scope.go:117] "RemoveContainer" containerID="db919e85951f0b5c744e820b05d43e85696840ac52d8cbfbeea5685c602a4d3d" Dec 01 15:07:08 crc kubenswrapper[4637]: I1201 15:07:08.009922 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" Dec 01 15:07:08 crc kubenswrapper[4637]: I1201 15:07:08.040725 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66d735be-b6b1-4cd3-bb09-a959dc293ef9","Type":"ContainerStarted","Data":"a313bfe10ba79e603ec70359b7291121f7fe773bc3cc9d2849cd37137b36f35c"} Dec 01 15:07:08 crc kubenswrapper[4637]: I1201 15:07:08.067512 4637 scope.go:117] "RemoveContainer" containerID="255d24f251f2253fd42dcc3efa0c8a0922af793f95e693ce4c9c89061f18ce70" Dec 01 15:07:08 crc kubenswrapper[4637]: I1201 15:07:08.142871 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-gbnmb"] Dec 01 15:07:08 crc kubenswrapper[4637]: I1201 15:07:08.148009 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-gbnmb"] Dec 01 15:07:08 crc kubenswrapper[4637]: I1201 15:07:08.293812 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-ccqz9"] Dec 01 15:07:09 crc kubenswrapper[4637]: I1201 15:07:09.052367 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ccqz9" event={"ID":"ba29815f-9b56-490a-ad00-b358c7328ec9","Type":"ContainerStarted","Data":"96fdcf9ed7c60cc3388ac2f415865a265fcd89ba46e4afc7a438d5c178f3835d"} Dec 01 15:07:09 crc kubenswrapper[4637]: I1201 15:07:09.052730 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ccqz9" event={"ID":"ba29815f-9b56-490a-ad00-b358c7328ec9","Type":"ContainerStarted","Data":"51495fac7906c2107de368094468b913b34b4705051735b17bc1dbb40a9810e8"} Dec 01 15:07:09 crc kubenswrapper[4637]: I1201 15:07:09.055720 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66d735be-b6b1-4cd3-bb09-a959dc293ef9","Type":"ContainerStarted","Data":"e83cbe2d2415bf8261ac95e246df130c938c17ccb4cf8c0cae2082d5a659c665"} Dec 01 15:07:09 crc 
kubenswrapper[4637]: I1201 15:07:09.055902 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66d735be-b6b1-4cd3-bb09-a959dc293ef9" containerName="ceilometer-central-agent" containerID="cri-o://ec8386abfe61b96b7225e177b83e20f8c6d19271cd65fbabf6529dd51676a223" gracePeriod=30 Dec 01 15:07:09 crc kubenswrapper[4637]: I1201 15:07:09.055940 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66d735be-b6b1-4cd3-bb09-a959dc293ef9" containerName="proxy-httpd" containerID="cri-o://e83cbe2d2415bf8261ac95e246df130c938c17ccb4cf8c0cae2082d5a659c665" gracePeriod=30 Dec 01 15:07:09 crc kubenswrapper[4637]: I1201 15:07:09.055970 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66d735be-b6b1-4cd3-bb09-a959dc293ef9" containerName="sg-core" containerID="cri-o://a313bfe10ba79e603ec70359b7291121f7fe773bc3cc9d2849cd37137b36f35c" gracePeriod=30 Dec 01 15:07:09 crc kubenswrapper[4637]: I1201 15:07:09.055950 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66d735be-b6b1-4cd3-bb09-a959dc293ef9" containerName="ceilometer-notification-agent" containerID="cri-o://5c3a227f6ca7bfea5c0169abda5f255618731e86d96b91fe038fc861ad63cf56" gracePeriod=30 Dec 01 15:07:09 crc kubenswrapper[4637]: I1201 15:07:09.056057 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 01 15:07:09 crc kubenswrapper[4637]: I1201 15:07:09.079136 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-ccqz9" podStartSLOduration=2.079118972 podStartE2EDuration="2.079118972s" podCreationTimestamp="2025-12-01 15:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:07:09.070568351 
+0000 UTC m=+1279.588277179" watchObservedRunningTime="2025-12-01 15:07:09.079118972 +0000 UTC m=+1279.596827800" Dec 01 15:07:09 crc kubenswrapper[4637]: I1201 15:07:09.098582 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.64259659 podStartE2EDuration="7.098551408s" podCreationTimestamp="2025-12-01 15:07:02 +0000 UTC" firstStartedPulling="2025-12-01 15:07:03.740441545 +0000 UTC m=+1274.258150373" lastFinishedPulling="2025-12-01 15:07:08.196396363 +0000 UTC m=+1278.714105191" observedRunningTime="2025-12-01 15:07:09.092700039 +0000 UTC m=+1279.610408867" watchObservedRunningTime="2025-12-01 15:07:09.098551408 +0000 UTC m=+1279.616260236" Dec 01 15:07:09 crc kubenswrapper[4637]: I1201 15:07:09.781901 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bca325c0-0fea-403c-8354-654b1c6167a3" path="/var/lib/kubelet/pods/bca325c0-0fea-403c-8354-654b1c6167a3/volumes" Dec 01 15:07:10 crc kubenswrapper[4637]: I1201 15:07:10.073305 4637 generic.go:334] "Generic (PLEG): container finished" podID="66d735be-b6b1-4cd3-bb09-a959dc293ef9" containerID="e83cbe2d2415bf8261ac95e246df130c938c17ccb4cf8c0cae2082d5a659c665" exitCode=0 Dec 01 15:07:10 crc kubenswrapper[4637]: I1201 15:07:10.073350 4637 generic.go:334] "Generic (PLEG): container finished" podID="66d735be-b6b1-4cd3-bb09-a959dc293ef9" containerID="a313bfe10ba79e603ec70359b7291121f7fe773bc3cc9d2849cd37137b36f35c" exitCode=2 Dec 01 15:07:10 crc kubenswrapper[4637]: I1201 15:07:10.073361 4637 generic.go:334] "Generic (PLEG): container finished" podID="66d735be-b6b1-4cd3-bb09-a959dc293ef9" containerID="5c3a227f6ca7bfea5c0169abda5f255618731e86d96b91fe038fc861ad63cf56" exitCode=0 Dec 01 15:07:10 crc kubenswrapper[4637]: I1201 15:07:10.073584 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"66d735be-b6b1-4cd3-bb09-a959dc293ef9","Type":"ContainerDied","Data":"e83cbe2d2415bf8261ac95e246df130c938c17ccb4cf8c0cae2082d5a659c665"} Dec 01 15:07:10 crc kubenswrapper[4637]: I1201 15:07:10.073652 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66d735be-b6b1-4cd3-bb09-a959dc293ef9","Type":"ContainerDied","Data":"a313bfe10ba79e603ec70359b7291121f7fe773bc3cc9d2849cd37137b36f35c"} Dec 01 15:07:10 crc kubenswrapper[4637]: I1201 15:07:10.073663 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66d735be-b6b1-4cd3-bb09-a959dc293ef9","Type":"ContainerDied","Data":"5c3a227f6ca7bfea5c0169abda5f255618731e86d96b91fe038fc861ad63cf56"} Dec 01 15:07:12 crc kubenswrapper[4637]: I1201 15:07:12.009639 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-865f5d856f-gbnmb" podUID="bca325c0-0fea-403c-8354-654b1c6167a3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.189:5353: i/o timeout" Dec 01 15:07:13 crc kubenswrapper[4637]: I1201 15:07:13.694558 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:07:13 crc kubenswrapper[4637]: I1201 15:07:13.768454 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-combined-ca-bundle\") pod \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " Dec 01 15:07:13 crc kubenswrapper[4637]: I1201 15:07:13.768508 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66d735be-b6b1-4cd3-bb09-a959dc293ef9-log-httpd\") pod \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " Dec 01 15:07:13 crc kubenswrapper[4637]: I1201 15:07:13.768536 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-ceilometer-tls-certs\") pod \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " Dec 01 15:07:13 crc kubenswrapper[4637]: I1201 15:07:13.768609 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66d735be-b6b1-4cd3-bb09-a959dc293ef9-run-httpd\") pod \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " Dec 01 15:07:13 crc kubenswrapper[4637]: I1201 15:07:13.768725 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-config-data\") pod \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " Dec 01 15:07:13 crc kubenswrapper[4637]: I1201 15:07:13.768755 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74dp7\" (UniqueName: 
\"kubernetes.io/projected/66d735be-b6b1-4cd3-bb09-a959dc293ef9-kube-api-access-74dp7\") pod \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " Dec 01 15:07:13 crc kubenswrapper[4637]: I1201 15:07:13.768823 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-sg-core-conf-yaml\") pod \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " Dec 01 15:07:13 crc kubenswrapper[4637]: I1201 15:07:13.768845 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-scripts\") pod \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\" (UID: \"66d735be-b6b1-4cd3-bb09-a959dc293ef9\") " Dec 01 15:07:13 crc kubenswrapper[4637]: I1201 15:07:13.769660 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66d735be-b6b1-4cd3-bb09-a959dc293ef9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "66d735be-b6b1-4cd3-bb09-a959dc293ef9" (UID: "66d735be-b6b1-4cd3-bb09-a959dc293ef9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:07:13 crc kubenswrapper[4637]: I1201 15:07:13.769966 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66d735be-b6b1-4cd3-bb09-a959dc293ef9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "66d735be-b6b1-4cd3-bb09-a959dc293ef9" (UID: "66d735be-b6b1-4cd3-bb09-a959dc293ef9"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:07:13 crc kubenswrapper[4637]: I1201 15:07:13.770528 4637 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66d735be-b6b1-4cd3-bb09-a959dc293ef9-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:13 crc kubenswrapper[4637]: I1201 15:07:13.770552 4637 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66d735be-b6b1-4cd3-bb09-a959dc293ef9-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:13 crc kubenswrapper[4637]: I1201 15:07:13.789901 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66d735be-b6b1-4cd3-bb09-a959dc293ef9-kube-api-access-74dp7" (OuterVolumeSpecName: "kube-api-access-74dp7") pod "66d735be-b6b1-4cd3-bb09-a959dc293ef9" (UID: "66d735be-b6b1-4cd3-bb09-a959dc293ef9"). InnerVolumeSpecName "kube-api-access-74dp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:07:13 crc kubenswrapper[4637]: I1201 15:07:13.793182 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-scripts" (OuterVolumeSpecName: "scripts") pod "66d735be-b6b1-4cd3-bb09-a959dc293ef9" (UID: "66d735be-b6b1-4cd3-bb09-a959dc293ef9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:07:13 crc kubenswrapper[4637]: I1201 15:07:13.808850 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "66d735be-b6b1-4cd3-bb09-a959dc293ef9" (UID: "66d735be-b6b1-4cd3-bb09-a959dc293ef9"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:07:13 crc kubenswrapper[4637]: I1201 15:07:13.867071 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "66d735be-b6b1-4cd3-bb09-a959dc293ef9" (UID: "66d735be-b6b1-4cd3-bb09-a959dc293ef9"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:07:13 crc kubenswrapper[4637]: I1201 15:07:13.872439 4637 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:13 crc kubenswrapper[4637]: I1201 15:07:13.873121 4637 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:13 crc kubenswrapper[4637]: I1201 15:07:13.873133 4637 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:13 crc kubenswrapper[4637]: I1201 15:07:13.873412 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74dp7\" (UniqueName: \"kubernetes.io/projected/66d735be-b6b1-4cd3-bb09-a959dc293ef9-kube-api-access-74dp7\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:13 crc kubenswrapper[4637]: I1201 15:07:13.873740 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66d735be-b6b1-4cd3-bb09-a959dc293ef9" (UID: "66d735be-b6b1-4cd3-bb09-a959dc293ef9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:07:13 crc kubenswrapper[4637]: I1201 15:07:13.920674 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-config-data" (OuterVolumeSpecName: "config-data") pod "66d735be-b6b1-4cd3-bb09-a959dc293ef9" (UID: "66d735be-b6b1-4cd3-bb09-a959dc293ef9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:07:13 crc kubenswrapper[4637]: I1201 15:07:13.974896 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:13 crc kubenswrapper[4637]: I1201 15:07:13.974953 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66d735be-b6b1-4cd3-bb09-a959dc293ef9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.104126 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.109358 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.112505 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.153033 4637 generic.go:334] "Generic (PLEG): container finished" podID="66d735be-b6b1-4cd3-bb09-a959dc293ef9" containerID="ec8386abfe61b96b7225e177b83e20f8c6d19271cd65fbabf6529dd51676a223" exitCode=0 Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.155273 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.158549 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66d735be-b6b1-4cd3-bb09-a959dc293ef9","Type":"ContainerDied","Data":"ec8386abfe61b96b7225e177b83e20f8c6d19271cd65fbabf6529dd51676a223"} Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.158640 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66d735be-b6b1-4cd3-bb09-a959dc293ef9","Type":"ContainerDied","Data":"5976e9bd572595f2a46bbafbdcebe91801c4bd8ce86ccf66c7092f4795da8dbf"} Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.158669 4637 scope.go:117] "RemoveContainer" containerID="e83cbe2d2415bf8261ac95e246df130c938c17ccb4cf8c0cae2082d5a659c665" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.175227 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.239004 4637 scope.go:117] "RemoveContainer" containerID="a313bfe10ba79e603ec70359b7291121f7fe773bc3cc9d2849cd37137b36f35c" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.272843 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.285356 4637 scope.go:117] "RemoveContainer" containerID="5c3a227f6ca7bfea5c0169abda5f255618731e86d96b91fe038fc861ad63cf56" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.289084 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.305003 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.305098 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:07:14 crc 
kubenswrapper[4637]: I1201 15:07:14.354166 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:07:14 crc kubenswrapper[4637]: E1201 15:07:14.354870 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66d735be-b6b1-4cd3-bb09-a959dc293ef9" containerName="ceilometer-central-agent" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.354900 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="66d735be-b6b1-4cd3-bb09-a959dc293ef9" containerName="ceilometer-central-agent" Dec 01 15:07:14 crc kubenswrapper[4637]: E1201 15:07:14.354963 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66d735be-b6b1-4cd3-bb09-a959dc293ef9" containerName="ceilometer-notification-agent" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.354972 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="66d735be-b6b1-4cd3-bb09-a959dc293ef9" containerName="ceilometer-notification-agent" Dec 01 15:07:14 crc kubenswrapper[4637]: E1201 15:07:14.354978 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66d735be-b6b1-4cd3-bb09-a959dc293ef9" containerName="sg-core" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.354984 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="66d735be-b6b1-4cd3-bb09-a959dc293ef9" containerName="sg-core" Dec 01 15:07:14 crc kubenswrapper[4637]: E1201 15:07:14.355013 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66d735be-b6b1-4cd3-bb09-a959dc293ef9" containerName="proxy-httpd" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.355020 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="66d735be-b6b1-4cd3-bb09-a959dc293ef9" containerName="proxy-httpd" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.355281 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="66d735be-b6b1-4cd3-bb09-a959dc293ef9" containerName="sg-core" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.355318 4637 
memory_manager.go:354] "RemoveStaleState removing state" podUID="66d735be-b6b1-4cd3-bb09-a959dc293ef9" containerName="ceilometer-central-agent" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.355329 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="66d735be-b6b1-4cd3-bb09-a959dc293ef9" containerName="proxy-httpd" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.355343 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="66d735be-b6b1-4cd3-bb09-a959dc293ef9" containerName="ceilometer-notification-agent" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.355683 4637 scope.go:117] "RemoveContainer" containerID="ec8386abfe61b96b7225e177b83e20f8c6d19271cd65fbabf6529dd51676a223" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.360708 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.363396 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.363651 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.364086 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.401790 4637 scope.go:117] "RemoveContainer" containerID="e83cbe2d2415bf8261ac95e246df130c938c17ccb4cf8c0cae2082d5a659c665" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.403222 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:07:14 crc kubenswrapper[4637]: E1201 15:07:14.410460 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e83cbe2d2415bf8261ac95e246df130c938c17ccb4cf8c0cae2082d5a659c665\": 
container with ID starting with e83cbe2d2415bf8261ac95e246df130c938c17ccb4cf8c0cae2082d5a659c665 not found: ID does not exist" containerID="e83cbe2d2415bf8261ac95e246df130c938c17ccb4cf8c0cae2082d5a659c665" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.410534 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e83cbe2d2415bf8261ac95e246df130c938c17ccb4cf8c0cae2082d5a659c665"} err="failed to get container status \"e83cbe2d2415bf8261ac95e246df130c938c17ccb4cf8c0cae2082d5a659c665\": rpc error: code = NotFound desc = could not find container \"e83cbe2d2415bf8261ac95e246df130c938c17ccb4cf8c0cae2082d5a659c665\": container with ID starting with e83cbe2d2415bf8261ac95e246df130c938c17ccb4cf8c0cae2082d5a659c665 not found: ID does not exist" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.410571 4637 scope.go:117] "RemoveContainer" containerID="a313bfe10ba79e603ec70359b7291121f7fe773bc3cc9d2849cd37137b36f35c" Dec 01 15:07:14 crc kubenswrapper[4637]: E1201 15:07:14.411011 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a313bfe10ba79e603ec70359b7291121f7fe773bc3cc9d2849cd37137b36f35c\": container with ID starting with a313bfe10ba79e603ec70359b7291121f7fe773bc3cc9d2849cd37137b36f35c not found: ID does not exist" containerID="a313bfe10ba79e603ec70359b7291121f7fe773bc3cc9d2849cd37137b36f35c" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.411042 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a313bfe10ba79e603ec70359b7291121f7fe773bc3cc9d2849cd37137b36f35c"} err="failed to get container status \"a313bfe10ba79e603ec70359b7291121f7fe773bc3cc9d2849cd37137b36f35c\": rpc error: code = NotFound desc = could not find container \"a313bfe10ba79e603ec70359b7291121f7fe773bc3cc9d2849cd37137b36f35c\": container with ID starting with 
a313bfe10ba79e603ec70359b7291121f7fe773bc3cc9d2849cd37137b36f35c not found: ID does not exist" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.411059 4637 scope.go:117] "RemoveContainer" containerID="5c3a227f6ca7bfea5c0169abda5f255618731e86d96b91fe038fc861ad63cf56" Dec 01 15:07:14 crc kubenswrapper[4637]: E1201 15:07:14.411417 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c3a227f6ca7bfea5c0169abda5f255618731e86d96b91fe038fc861ad63cf56\": container with ID starting with 5c3a227f6ca7bfea5c0169abda5f255618731e86d96b91fe038fc861ad63cf56 not found: ID does not exist" containerID="5c3a227f6ca7bfea5c0169abda5f255618731e86d96b91fe038fc861ad63cf56" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.411464 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c3a227f6ca7bfea5c0169abda5f255618731e86d96b91fe038fc861ad63cf56"} err="failed to get container status \"5c3a227f6ca7bfea5c0169abda5f255618731e86d96b91fe038fc861ad63cf56\": rpc error: code = NotFound desc = could not find container \"5c3a227f6ca7bfea5c0169abda5f255618731e86d96b91fe038fc861ad63cf56\": container with ID starting with 5c3a227f6ca7bfea5c0169abda5f255618731e86d96b91fe038fc861ad63cf56 not found: ID does not exist" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.411503 4637 scope.go:117] "RemoveContainer" containerID="ec8386abfe61b96b7225e177b83e20f8c6d19271cd65fbabf6529dd51676a223" Dec 01 15:07:14 crc kubenswrapper[4637]: E1201 15:07:14.411748 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec8386abfe61b96b7225e177b83e20f8c6d19271cd65fbabf6529dd51676a223\": container with ID starting with ec8386abfe61b96b7225e177b83e20f8c6d19271cd65fbabf6529dd51676a223 not found: ID does not exist" containerID="ec8386abfe61b96b7225e177b83e20f8c6d19271cd65fbabf6529dd51676a223" Dec 01 15:07:14 crc 
kubenswrapper[4637]: I1201 15:07:14.411780 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec8386abfe61b96b7225e177b83e20f8c6d19271cd65fbabf6529dd51676a223"} err="failed to get container status \"ec8386abfe61b96b7225e177b83e20f8c6d19271cd65fbabf6529dd51676a223\": rpc error: code = NotFound desc = could not find container \"ec8386abfe61b96b7225e177b83e20f8c6d19271cd65fbabf6529dd51676a223\": container with ID starting with ec8386abfe61b96b7225e177b83e20f8c6d19271cd65fbabf6529dd51676a223 not found: ID does not exist" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.489660 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd466a3c-d503-4718-a059-1cba9c618b07-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cd466a3c-d503-4718-a059-1cba9c618b07\") " pod="openstack/ceilometer-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.489728 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wc87\" (UniqueName: \"kubernetes.io/projected/cd466a3c-d503-4718-a059-1cba9c618b07-kube-api-access-2wc87\") pod \"ceilometer-0\" (UID: \"cd466a3c-d503-4718-a059-1cba9c618b07\") " pod="openstack/ceilometer-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.489747 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd466a3c-d503-4718-a059-1cba9c618b07-scripts\") pod \"ceilometer-0\" (UID: \"cd466a3c-d503-4718-a059-1cba9c618b07\") " pod="openstack/ceilometer-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.489777 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd466a3c-d503-4718-a059-1cba9c618b07-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"cd466a3c-d503-4718-a059-1cba9c618b07\") " pod="openstack/ceilometer-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.489799 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd466a3c-d503-4718-a059-1cba9c618b07-config-data\") pod \"ceilometer-0\" (UID: \"cd466a3c-d503-4718-a059-1cba9c618b07\") " pod="openstack/ceilometer-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.489818 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd466a3c-d503-4718-a059-1cba9c618b07-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cd466a3c-d503-4718-a059-1cba9c618b07\") " pod="openstack/ceilometer-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.489887 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd466a3c-d503-4718-a059-1cba9c618b07-log-httpd\") pod \"ceilometer-0\" (UID: \"cd466a3c-d503-4718-a059-1cba9c618b07\") " pod="openstack/ceilometer-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.489962 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd466a3c-d503-4718-a059-1cba9c618b07-run-httpd\") pod \"ceilometer-0\" (UID: \"cd466a3c-d503-4718-a059-1cba9c618b07\") " pod="openstack/ceilometer-0" Dec 01 15:07:14 crc kubenswrapper[4637]: E1201 15:07:14.570731 4637 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66d735be_b6b1_4cd3_bb09_a959dc293ef9.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66d735be_b6b1_4cd3_bb09_a959dc293ef9.slice/crio-5976e9bd572595f2a46bbafbdcebe91801c4bd8ce86ccf66c7092f4795da8dbf\": RecentStats: unable to find data in memory cache]" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.591269 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd466a3c-d503-4718-a059-1cba9c618b07-log-httpd\") pod \"ceilometer-0\" (UID: \"cd466a3c-d503-4718-a059-1cba9c618b07\") " pod="openstack/ceilometer-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.591392 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd466a3c-d503-4718-a059-1cba9c618b07-run-httpd\") pod \"ceilometer-0\" (UID: \"cd466a3c-d503-4718-a059-1cba9c618b07\") " pod="openstack/ceilometer-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.591431 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd466a3c-d503-4718-a059-1cba9c618b07-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cd466a3c-d503-4718-a059-1cba9c618b07\") " pod="openstack/ceilometer-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.591476 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wc87\" (UniqueName: \"kubernetes.io/projected/cd466a3c-d503-4718-a059-1cba9c618b07-kube-api-access-2wc87\") pod \"ceilometer-0\" (UID: \"cd466a3c-d503-4718-a059-1cba9c618b07\") " pod="openstack/ceilometer-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.591504 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd466a3c-d503-4718-a059-1cba9c618b07-scripts\") pod \"ceilometer-0\" (UID: \"cd466a3c-d503-4718-a059-1cba9c618b07\") " 
pod="openstack/ceilometer-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.591536 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd466a3c-d503-4718-a059-1cba9c618b07-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cd466a3c-d503-4718-a059-1cba9c618b07\") " pod="openstack/ceilometer-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.591557 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd466a3c-d503-4718-a059-1cba9c618b07-config-data\") pod \"ceilometer-0\" (UID: \"cd466a3c-d503-4718-a059-1cba9c618b07\") " pod="openstack/ceilometer-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.591575 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd466a3c-d503-4718-a059-1cba9c618b07-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cd466a3c-d503-4718-a059-1cba9c618b07\") " pod="openstack/ceilometer-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.592864 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd466a3c-d503-4718-a059-1cba9c618b07-run-httpd\") pod \"ceilometer-0\" (UID: \"cd466a3c-d503-4718-a059-1cba9c618b07\") " pod="openstack/ceilometer-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.593193 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd466a3c-d503-4718-a059-1cba9c618b07-log-httpd\") pod \"ceilometer-0\" (UID: \"cd466a3c-d503-4718-a059-1cba9c618b07\") " pod="openstack/ceilometer-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.601467 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cd466a3c-d503-4718-a059-1cba9c618b07-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cd466a3c-d503-4718-a059-1cba9c618b07\") " pod="openstack/ceilometer-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.601728 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd466a3c-d503-4718-a059-1cba9c618b07-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cd466a3c-d503-4718-a059-1cba9c618b07\") " pod="openstack/ceilometer-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.602062 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd466a3c-d503-4718-a059-1cba9c618b07-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cd466a3c-d503-4718-a059-1cba9c618b07\") " pod="openstack/ceilometer-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.606598 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd466a3c-d503-4718-a059-1cba9c618b07-scripts\") pod \"ceilometer-0\" (UID: \"cd466a3c-d503-4718-a059-1cba9c618b07\") " pod="openstack/ceilometer-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.618431 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd466a3c-d503-4718-a059-1cba9c618b07-config-data\") pod \"ceilometer-0\" (UID: \"cd466a3c-d503-4718-a059-1cba9c618b07\") " pod="openstack/ceilometer-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.626645 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wc87\" (UniqueName: \"kubernetes.io/projected/cd466a3c-d503-4718-a059-1cba9c618b07-kube-api-access-2wc87\") pod \"ceilometer-0\" (UID: \"cd466a3c-d503-4718-a059-1cba9c618b07\") " pod="openstack/ceilometer-0" Dec 01 15:07:14 crc kubenswrapper[4637]: I1201 15:07:14.694676 4637 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 15:07:15 crc kubenswrapper[4637]: I1201 15:07:15.169505 4637 generic.go:334] "Generic (PLEG): container finished" podID="ba29815f-9b56-490a-ad00-b358c7328ec9" containerID="96fdcf9ed7c60cc3388ac2f415865a265fcd89ba46e4afc7a438d5c178f3835d" exitCode=0 Dec 01 15:07:15 crc kubenswrapper[4637]: I1201 15:07:15.169596 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ccqz9" event={"ID":"ba29815f-9b56-490a-ad00-b358c7328ec9","Type":"ContainerDied","Data":"96fdcf9ed7c60cc3388ac2f415865a265fcd89ba46e4afc7a438d5c178f3835d"} Dec 01 15:07:15 crc kubenswrapper[4637]: I1201 15:07:15.340177 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8883a1af-5436-4fd3-bd2b-6594a0e74f9d" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 15:07:15 crc kubenswrapper[4637]: I1201 15:07:15.340860 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8883a1af-5436-4fd3-bd2b-6594a0e74f9d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 15:07:15 crc kubenswrapper[4637]: I1201 15:07:15.428280 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 15:07:15 crc kubenswrapper[4637]: I1201 15:07:15.783057 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66d735be-b6b1-4cd3-bb09-a959dc293ef9" path="/var/lib/kubelet/pods/66d735be-b6b1-4cd3-bb09-a959dc293ef9/volumes" Dec 01 15:07:16 crc kubenswrapper[4637]: I1201 15:07:16.186373 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"cd466a3c-d503-4718-a059-1cba9c618b07","Type":"ContainerStarted","Data":"dffc5be247b17efd55fb8c21b220fa0597ebd8d89b3861aa112838a410e425ef"} Dec 01 15:07:16 crc kubenswrapper[4637]: I1201 15:07:16.595228 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ccqz9" Dec 01 15:07:16 crc kubenswrapper[4637]: I1201 15:07:16.706395 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj4j4\" (UniqueName: \"kubernetes.io/projected/ba29815f-9b56-490a-ad00-b358c7328ec9-kube-api-access-kj4j4\") pod \"ba29815f-9b56-490a-ad00-b358c7328ec9\" (UID: \"ba29815f-9b56-490a-ad00-b358c7328ec9\") " Dec 01 15:07:16 crc kubenswrapper[4637]: I1201 15:07:16.706666 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba29815f-9b56-490a-ad00-b358c7328ec9-combined-ca-bundle\") pod \"ba29815f-9b56-490a-ad00-b358c7328ec9\" (UID: \"ba29815f-9b56-490a-ad00-b358c7328ec9\") " Dec 01 15:07:16 crc kubenswrapper[4637]: I1201 15:07:16.706754 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba29815f-9b56-490a-ad00-b358c7328ec9-scripts\") pod \"ba29815f-9b56-490a-ad00-b358c7328ec9\" (UID: \"ba29815f-9b56-490a-ad00-b358c7328ec9\") " Dec 01 15:07:16 crc kubenswrapper[4637]: I1201 15:07:16.706779 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba29815f-9b56-490a-ad00-b358c7328ec9-config-data\") pod \"ba29815f-9b56-490a-ad00-b358c7328ec9\" (UID: \"ba29815f-9b56-490a-ad00-b358c7328ec9\") " Dec 01 15:07:16 crc kubenswrapper[4637]: I1201 15:07:16.711953 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba29815f-9b56-490a-ad00-b358c7328ec9-kube-api-access-kj4j4" (OuterVolumeSpecName: 
"kube-api-access-kj4j4") pod "ba29815f-9b56-490a-ad00-b358c7328ec9" (UID: "ba29815f-9b56-490a-ad00-b358c7328ec9"). InnerVolumeSpecName "kube-api-access-kj4j4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:07:16 crc kubenswrapper[4637]: I1201 15:07:16.715109 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba29815f-9b56-490a-ad00-b358c7328ec9-scripts" (OuterVolumeSpecName: "scripts") pod "ba29815f-9b56-490a-ad00-b358c7328ec9" (UID: "ba29815f-9b56-490a-ad00-b358c7328ec9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:07:16 crc kubenswrapper[4637]: I1201 15:07:16.751865 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba29815f-9b56-490a-ad00-b358c7328ec9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba29815f-9b56-490a-ad00-b358c7328ec9" (UID: "ba29815f-9b56-490a-ad00-b358c7328ec9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:07:16 crc kubenswrapper[4637]: I1201 15:07:16.780619 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba29815f-9b56-490a-ad00-b358c7328ec9-config-data" (OuterVolumeSpecName: "config-data") pod "ba29815f-9b56-490a-ad00-b358c7328ec9" (UID: "ba29815f-9b56-490a-ad00-b358c7328ec9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:07:16 crc kubenswrapper[4637]: I1201 15:07:16.809462 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba29815f-9b56-490a-ad00-b358c7328ec9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:16 crc kubenswrapper[4637]: I1201 15:07:16.809493 4637 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba29815f-9b56-490a-ad00-b358c7328ec9-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:16 crc kubenswrapper[4637]: I1201 15:07:16.809505 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba29815f-9b56-490a-ad00-b358c7328ec9-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:16 crc kubenswrapper[4637]: I1201 15:07:16.809516 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj4j4\" (UniqueName: \"kubernetes.io/projected/ba29815f-9b56-490a-ad00-b358c7328ec9-kube-api-access-kj4j4\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:17 crc kubenswrapper[4637]: I1201 15:07:17.196767 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd466a3c-d503-4718-a059-1cba9c618b07","Type":"ContainerStarted","Data":"74ef198fd9356368a528909d448b30670752fcb34f48d9333385a17f4e2cbea1"} Dec 01 15:07:17 crc kubenswrapper[4637]: I1201 15:07:17.197110 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd466a3c-d503-4718-a059-1cba9c618b07","Type":"ContainerStarted","Data":"85d41394a6b5de03b7f9efa5dc0c7fce9f8b65f58fdfe6c8237fe9630910ef3c"} Dec 01 15:07:17 crc kubenswrapper[4637]: I1201 15:07:17.198495 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ccqz9" 
event={"ID":"ba29815f-9b56-490a-ad00-b358c7328ec9","Type":"ContainerDied","Data":"51495fac7906c2107de368094468b913b34b4705051735b17bc1dbb40a9810e8"} Dec 01 15:07:17 crc kubenswrapper[4637]: I1201 15:07:17.198537 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51495fac7906c2107de368094468b913b34b4705051735b17bc1dbb40a9810e8" Dec 01 15:07:17 crc kubenswrapper[4637]: I1201 15:07:17.198549 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ccqz9" Dec 01 15:07:17 crc kubenswrapper[4637]: I1201 15:07:17.403049 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 01 15:07:17 crc kubenswrapper[4637]: I1201 15:07:17.403330 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8883a1af-5436-4fd3-bd2b-6594a0e74f9d" containerName="nova-api-log" containerID="cri-o://215ea9cbd565f71fb5de5f7f075ea7201086415d7900e24d4ef6960460a919a9" gracePeriod=30 Dec 01 15:07:17 crc kubenswrapper[4637]: I1201 15:07:17.403631 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8883a1af-5436-4fd3-bd2b-6594a0e74f9d" containerName="nova-api-api" containerID="cri-o://4f106d4801b2a6a15e117af79d94cab7a9897136f4abec9ab227a7919af13217" gracePeriod=30 Dec 01 15:07:17 crc kubenswrapper[4637]: I1201 15:07:17.414417 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 15:07:17 crc kubenswrapper[4637]: I1201 15:07:17.414695 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc" containerName="nova-scheduler-scheduler" containerID="cri-o://404946fe883125e8907a149cd4871efc8e32e97b00c3cc0e4cc47d33f99af6dc" gracePeriod=30 Dec 01 15:07:17 crc kubenswrapper[4637]: I1201 15:07:17.443357 4637 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 15:07:17 crc kubenswrapper[4637]: I1201 15:07:17.443589 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="548020cf-ecfe-4200-b295-af12b510bd28" containerName="nova-metadata-log" containerID="cri-o://02d02f0c8778c4b352e5b5d88dee1b0d0ff2c5bbe6c79ce312c634c352cea4a0" gracePeriod=30 Dec 01 15:07:17 crc kubenswrapper[4637]: I1201 15:07:17.443695 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="548020cf-ecfe-4200-b295-af12b510bd28" containerName="nova-metadata-metadata" containerID="cri-o://76cc11e37de29f42305478bcf2956f22822af5af2f5365d3144547b37933733f" gracePeriod=30 Dec 01 15:07:18 crc kubenswrapper[4637]: I1201 15:07:18.213524 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd466a3c-d503-4718-a059-1cba9c618b07","Type":"ContainerStarted","Data":"55733ad60256d27739c8118acacd7e1d1fa010475d7d52f09c28fdf750e8d5df"} Dec 01 15:07:18 crc kubenswrapper[4637]: I1201 15:07:18.218948 4637 generic.go:334] "Generic (PLEG): container finished" podID="8883a1af-5436-4fd3-bd2b-6594a0e74f9d" containerID="215ea9cbd565f71fb5de5f7f075ea7201086415d7900e24d4ef6960460a919a9" exitCode=143 Dec 01 15:07:18 crc kubenswrapper[4637]: I1201 15:07:18.219031 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8883a1af-5436-4fd3-bd2b-6594a0e74f9d","Type":"ContainerDied","Data":"215ea9cbd565f71fb5de5f7f075ea7201086415d7900e24d4ef6960460a919a9"} Dec 01 15:07:18 crc kubenswrapper[4637]: I1201 15:07:18.221571 4637 generic.go:334] "Generic (PLEG): container finished" podID="548020cf-ecfe-4200-b295-af12b510bd28" containerID="02d02f0c8778c4b352e5b5d88dee1b0d0ff2c5bbe6c79ce312c634c352cea4a0" exitCode=143 Dec 01 15:07:18 crc kubenswrapper[4637]: I1201 15:07:18.221614 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"548020cf-ecfe-4200-b295-af12b510bd28","Type":"ContainerDied","Data":"02d02f0c8778c4b352e5b5d88dee1b0d0ff2c5bbe6c79ce312c634c352cea4a0"} Dec 01 15:07:19 crc kubenswrapper[4637]: E1201 15:07:19.756242 4637 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 404946fe883125e8907a149cd4871efc8e32e97b00c3cc0e4cc47d33f99af6dc is running failed: container process not found" containerID="404946fe883125e8907a149cd4871efc8e32e97b00c3cc0e4cc47d33f99af6dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 01 15:07:19 crc kubenswrapper[4637]: E1201 15:07:19.757139 4637 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 404946fe883125e8907a149cd4871efc8e32e97b00c3cc0e4cc47d33f99af6dc is running failed: container process not found" containerID="404946fe883125e8907a149cd4871efc8e32e97b00c3cc0e4cc47d33f99af6dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 01 15:07:19 crc kubenswrapper[4637]: E1201 15:07:19.757432 4637 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 404946fe883125e8907a149cd4871efc8e32e97b00c3cc0e4cc47d33f99af6dc is running failed: container process not found" containerID="404946fe883125e8907a149cd4871efc8e32e97b00c3cc0e4cc47d33f99af6dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 01 15:07:19 crc kubenswrapper[4637]: E1201 15:07:19.757465 4637 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 404946fe883125e8907a149cd4871efc8e32e97b00c3cc0e4cc47d33f99af6dc is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc" 
containerName="nova-scheduler-scheduler" Dec 01 15:07:19 crc kubenswrapper[4637]: I1201 15:07:19.832517 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.001608 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc-combined-ca-bundle\") pod \"ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc\" (UID: \"ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc\") " Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.001842 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc-config-data\") pod \"ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc\" (UID: \"ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc\") " Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.001958 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl4rj\" (UniqueName: \"kubernetes.io/projected/ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc-kube-api-access-zl4rj\") pod \"ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc\" (UID: \"ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc\") " Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.008434 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc-kube-api-access-zl4rj" (OuterVolumeSpecName: "kube-api-access-zl4rj") pod "ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc" (UID: "ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc"). InnerVolumeSpecName "kube-api-access-zl4rj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.029836 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc" (UID: "ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.039282 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc-config-data" (OuterVolumeSpecName: "config-data") pod "ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc" (UID: "ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.103522 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.103556 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl4rj\" (UniqueName: \"kubernetes.io/projected/ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc-kube-api-access-zl4rj\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.103566 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.241207 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"cd466a3c-d503-4718-a059-1cba9c618b07","Type":"ContainerStarted","Data":"648daa626c72d4869c493e292ff92697d314dc2e1e8d6ec998ba661326ebdec7"} Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.241336 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.242562 4637 generic.go:334] "Generic (PLEG): container finished" podID="ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc" containerID="404946fe883125e8907a149cd4871efc8e32e97b00c3cc0e4cc47d33f99af6dc" exitCode=0 Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.242600 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc","Type":"ContainerDied","Data":"404946fe883125e8907a149cd4871efc8e32e97b00c3cc0e4cc47d33f99af6dc"} Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.242624 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc","Type":"ContainerDied","Data":"172baf1862ede075db71b5809407b5a95b5afcd2910e57a696dad61062943540"} Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.242654 4637 scope.go:117] "RemoveContainer" containerID="404946fe883125e8907a149cd4871efc8e32e97b00c3cc0e4cc47d33f99af6dc" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.242789 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.263470 4637 scope.go:117] "RemoveContainer" containerID="404946fe883125e8907a149cd4871efc8e32e97b00c3cc0e4cc47d33f99af6dc" Dec 01 15:07:20 crc kubenswrapper[4637]: E1201 15:07:20.264280 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"404946fe883125e8907a149cd4871efc8e32e97b00c3cc0e4cc47d33f99af6dc\": container with ID starting with 404946fe883125e8907a149cd4871efc8e32e97b00c3cc0e4cc47d33f99af6dc not found: ID does not exist" containerID="404946fe883125e8907a149cd4871efc8e32e97b00c3cc0e4cc47d33f99af6dc" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.264337 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"404946fe883125e8907a149cd4871efc8e32e97b00c3cc0e4cc47d33f99af6dc"} err="failed to get container status \"404946fe883125e8907a149cd4871efc8e32e97b00c3cc0e4cc47d33f99af6dc\": rpc error: code = NotFound desc = could not find container \"404946fe883125e8907a149cd4871efc8e32e97b00c3cc0e4cc47d33f99af6dc\": container with ID starting with 404946fe883125e8907a149cd4871efc8e32e97b00c3cc0e4cc47d33f99af6dc not found: ID does not exist" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.280391 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.193118574 podStartE2EDuration="6.280276321s" podCreationTimestamp="2025-12-01 15:07:14 +0000 UTC" firstStartedPulling="2025-12-01 15:07:15.428478551 +0000 UTC m=+1285.946187379" lastFinishedPulling="2025-12-01 15:07:19.515636298 +0000 UTC m=+1290.033345126" observedRunningTime="2025-12-01 15:07:20.266314453 +0000 UTC m=+1290.784023281" watchObservedRunningTime="2025-12-01 15:07:20.280276321 +0000 UTC m=+1290.797985149" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.305414 4637 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.316947 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.325335 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 15:07:20 crc kubenswrapper[4637]: E1201 15:07:20.325851 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc" containerName="nova-scheduler-scheduler" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.325866 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc" containerName="nova-scheduler-scheduler" Dec 01 15:07:20 crc kubenswrapper[4637]: E1201 15:07:20.325904 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba29815f-9b56-490a-ad00-b358c7328ec9" containerName="nova-manage" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.325910 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba29815f-9b56-490a-ad00-b358c7328ec9" containerName="nova-manage" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.326175 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc" containerName="nova-scheduler-scheduler" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.326195 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba29815f-9b56-490a-ad00-b358c7328ec9" containerName="nova-manage" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.326881 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.329811 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.334343 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.408790 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c2nn\" (UniqueName: \"kubernetes.io/projected/264d91ff-c64c-4d65-bedc-4a11945042f0-kube-api-access-5c2nn\") pod \"nova-scheduler-0\" (UID: \"264d91ff-c64c-4d65-bedc-4a11945042f0\") " pod="openstack/nova-scheduler-0" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.409408 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/264d91ff-c64c-4d65-bedc-4a11945042f0-config-data\") pod \"nova-scheduler-0\" (UID: \"264d91ff-c64c-4d65-bedc-4a11945042f0\") " pod="openstack/nova-scheduler-0" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.409447 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/264d91ff-c64c-4d65-bedc-4a11945042f0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"264d91ff-c64c-4d65-bedc-4a11945042f0\") " pod="openstack/nova-scheduler-0" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.511500 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/264d91ff-c64c-4d65-bedc-4a11945042f0-config-data\") pod \"nova-scheduler-0\" (UID: \"264d91ff-c64c-4d65-bedc-4a11945042f0\") " pod="openstack/nova-scheduler-0" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.511559 4637 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/264d91ff-c64c-4d65-bedc-4a11945042f0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"264d91ff-c64c-4d65-bedc-4a11945042f0\") " pod="openstack/nova-scheduler-0" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.511724 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c2nn\" (UniqueName: \"kubernetes.io/projected/264d91ff-c64c-4d65-bedc-4a11945042f0-kube-api-access-5c2nn\") pod \"nova-scheduler-0\" (UID: \"264d91ff-c64c-4d65-bedc-4a11945042f0\") " pod="openstack/nova-scheduler-0" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.517753 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/264d91ff-c64c-4d65-bedc-4a11945042f0-config-data\") pod \"nova-scheduler-0\" (UID: \"264d91ff-c64c-4d65-bedc-4a11945042f0\") " pod="openstack/nova-scheduler-0" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.518246 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/264d91ff-c64c-4d65-bedc-4a11945042f0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"264d91ff-c64c-4d65-bedc-4a11945042f0\") " pod="openstack/nova-scheduler-0" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.528974 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c2nn\" (UniqueName: \"kubernetes.io/projected/264d91ff-c64c-4d65-bedc-4a11945042f0-kube-api-access-5c2nn\") pod \"nova-scheduler-0\" (UID: \"264d91ff-c64c-4d65-bedc-4a11945042f0\") " pod="openstack/nova-scheduler-0" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.646903 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.904271 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="548020cf-ecfe-4200-b295-af12b510bd28" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:53226->10.217.0.196:8775: read: connection reset by peer" Dec 01 15:07:20 crc kubenswrapper[4637]: I1201 15:07:20.904271 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="548020cf-ecfe-4200-b295-af12b510bd28" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:53224->10.217.0.196:8775: read: connection reset by peer" Dec 01 15:07:21 crc kubenswrapper[4637]: I1201 15:07:21.192014 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 15:07:21 crc kubenswrapper[4637]: W1201 15:07:21.208113 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod264d91ff_c64c_4d65_bedc_4a11945042f0.slice/crio-822b180655099466f05f195ae4adb46eae463b9975c4cb5161e61b8177ff17bc WatchSource:0}: Error finding container 822b180655099466f05f195ae4adb46eae463b9975c4cb5161e61b8177ff17bc: Status 404 returned error can't find the container with id 822b180655099466f05f195ae4adb46eae463b9975c4cb5161e61b8177ff17bc Dec 01 15:07:21 crc kubenswrapper[4637]: I1201 15:07:21.257211 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"264d91ff-c64c-4d65-bedc-4a11945042f0","Type":"ContainerStarted","Data":"822b180655099466f05f195ae4adb46eae463b9975c4cb5161e61b8177ff17bc"} Dec 01 15:07:21 crc kubenswrapper[4637]: I1201 15:07:21.261542 4637 generic.go:334] "Generic (PLEG): container finished" podID="548020cf-ecfe-4200-b295-af12b510bd28" 
containerID="76cc11e37de29f42305478bcf2956f22822af5af2f5365d3144547b37933733f" exitCode=0 Dec 01 15:07:21 crc kubenswrapper[4637]: I1201 15:07:21.262650 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"548020cf-ecfe-4200-b295-af12b510bd28","Type":"ContainerDied","Data":"76cc11e37de29f42305478bcf2956f22822af5af2f5365d3144547b37933733f"} Dec 01 15:07:21 crc kubenswrapper[4637]: I1201 15:07:21.282446 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 15:07:21 crc kubenswrapper[4637]: I1201 15:07:21.435952 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/548020cf-ecfe-4200-b295-af12b510bd28-config-data\") pod \"548020cf-ecfe-4200-b295-af12b510bd28\" (UID: \"548020cf-ecfe-4200-b295-af12b510bd28\") " Dec 01 15:07:21 crc kubenswrapper[4637]: I1201 15:07:21.436401 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/548020cf-ecfe-4200-b295-af12b510bd28-logs\") pod \"548020cf-ecfe-4200-b295-af12b510bd28\" (UID: \"548020cf-ecfe-4200-b295-af12b510bd28\") " Dec 01 15:07:21 crc kubenswrapper[4637]: I1201 15:07:21.436473 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/548020cf-ecfe-4200-b295-af12b510bd28-combined-ca-bundle\") pod \"548020cf-ecfe-4200-b295-af12b510bd28\" (UID: \"548020cf-ecfe-4200-b295-af12b510bd28\") " Dec 01 15:07:21 crc kubenswrapper[4637]: I1201 15:07:21.436531 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpmwt\" (UniqueName: \"kubernetes.io/projected/548020cf-ecfe-4200-b295-af12b510bd28-kube-api-access-bpmwt\") pod \"548020cf-ecfe-4200-b295-af12b510bd28\" (UID: \"548020cf-ecfe-4200-b295-af12b510bd28\") " Dec 01 15:07:21 crc 
kubenswrapper[4637]: I1201 15:07:21.436579 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/548020cf-ecfe-4200-b295-af12b510bd28-nova-metadata-tls-certs\") pod \"548020cf-ecfe-4200-b295-af12b510bd28\" (UID: \"548020cf-ecfe-4200-b295-af12b510bd28\") " Dec 01 15:07:21 crc kubenswrapper[4637]: I1201 15:07:21.438798 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/548020cf-ecfe-4200-b295-af12b510bd28-logs" (OuterVolumeSpecName: "logs") pod "548020cf-ecfe-4200-b295-af12b510bd28" (UID: "548020cf-ecfe-4200-b295-af12b510bd28"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:07:21 crc kubenswrapper[4637]: I1201 15:07:21.446426 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/548020cf-ecfe-4200-b295-af12b510bd28-kube-api-access-bpmwt" (OuterVolumeSpecName: "kube-api-access-bpmwt") pod "548020cf-ecfe-4200-b295-af12b510bd28" (UID: "548020cf-ecfe-4200-b295-af12b510bd28"). InnerVolumeSpecName "kube-api-access-bpmwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:07:21 crc kubenswrapper[4637]: I1201 15:07:21.466166 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/548020cf-ecfe-4200-b295-af12b510bd28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "548020cf-ecfe-4200-b295-af12b510bd28" (UID: "548020cf-ecfe-4200-b295-af12b510bd28"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:07:21 crc kubenswrapper[4637]: I1201 15:07:21.503971 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/548020cf-ecfe-4200-b295-af12b510bd28-config-data" (OuterVolumeSpecName: "config-data") pod "548020cf-ecfe-4200-b295-af12b510bd28" (UID: "548020cf-ecfe-4200-b295-af12b510bd28"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:07:21 crc kubenswrapper[4637]: I1201 15:07:21.508994 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/548020cf-ecfe-4200-b295-af12b510bd28-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "548020cf-ecfe-4200-b295-af12b510bd28" (UID: "548020cf-ecfe-4200-b295-af12b510bd28"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:07:21 crc kubenswrapper[4637]: I1201 15:07:21.539035 4637 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/548020cf-ecfe-4200-b295-af12b510bd28-logs\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:21 crc kubenswrapper[4637]: I1201 15:07:21.539095 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/548020cf-ecfe-4200-b295-af12b510bd28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:21 crc kubenswrapper[4637]: I1201 15:07:21.539118 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpmwt\" (UniqueName: \"kubernetes.io/projected/548020cf-ecfe-4200-b295-af12b510bd28-kube-api-access-bpmwt\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:21 crc kubenswrapper[4637]: I1201 15:07:21.539131 4637 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/548020cf-ecfe-4200-b295-af12b510bd28-nova-metadata-tls-certs\") on 
node \"crc\" DevicePath \"\"" Dec 01 15:07:21 crc kubenswrapper[4637]: I1201 15:07:21.539142 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/548020cf-ecfe-4200-b295-af12b510bd28-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:21 crc kubenswrapper[4637]: I1201 15:07:21.802165 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc" path="/var/lib/kubelet/pods/ee3781ed-57e4-4baa-9dd6-872ac9d0f6dc/volumes" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.229255 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.297094 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"548020cf-ecfe-4200-b295-af12b510bd28","Type":"ContainerDied","Data":"0364885f372aaadc8668c10405040ff454ca3322520ba6e3e0951ebc0d69087e"} Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.297151 4637 scope.go:117] "RemoveContainer" containerID="76cc11e37de29f42305478bcf2956f22822af5af2f5365d3144547b37933733f" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.297296 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.325585 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"264d91ff-c64c-4d65-bedc-4a11945042f0","Type":"ContainerStarted","Data":"7117b2129d92f48f59b295f111d7aef8de70c75d7b7bfd38a8f6b25f5054262c"} Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.340537 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.350783 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.351400 4637 generic.go:334] "Generic (PLEG): container finished" podID="8883a1af-5436-4fd3-bd2b-6594a0e74f9d" containerID="4f106d4801b2a6a15e117af79d94cab7a9897136f4abec9ab227a7919af13217" exitCode=0 Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.351429 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8883a1af-5436-4fd3-bd2b-6594a0e74f9d","Type":"ContainerDied","Data":"4f106d4801b2a6a15e117af79d94cab7a9897136f4abec9ab227a7919af13217"} Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.351451 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8883a1af-5436-4fd3-bd2b-6594a0e74f9d","Type":"ContainerDied","Data":"5b8e0e5805694246649d55cca4992abfd740f5e38316afa0c30542ccbf1f63de"} Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.351512 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.358441 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 01 15:07:22 crc kubenswrapper[4637]: E1201 15:07:22.358838 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="548020cf-ecfe-4200-b295-af12b510bd28" containerName="nova-metadata-log" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.358850 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="548020cf-ecfe-4200-b295-af12b510bd28" containerName="nova-metadata-log" Dec 01 15:07:22 crc kubenswrapper[4637]: E1201 15:07:22.358880 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8883a1af-5436-4fd3-bd2b-6594a0e74f9d" containerName="nova-api-log" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.358886 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="8883a1af-5436-4fd3-bd2b-6594a0e74f9d" containerName="nova-api-log" Dec 01 15:07:22 crc kubenswrapper[4637]: E1201 15:07:22.358905 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8883a1af-5436-4fd3-bd2b-6594a0e74f9d" containerName="nova-api-api" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.358911 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="8883a1af-5436-4fd3-bd2b-6594a0e74f9d" containerName="nova-api-api" Dec 01 15:07:22 crc kubenswrapper[4637]: E1201 15:07:22.358945 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="548020cf-ecfe-4200-b295-af12b510bd28" containerName="nova-metadata-metadata" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.358952 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="548020cf-ecfe-4200-b295-af12b510bd28" containerName="nova-metadata-metadata" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.359152 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-logs\") pod \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\" (UID: \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\") " Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.359241 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-config-data\") pod \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\" (UID: \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\") " Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.359319 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-internal-tls-certs\") pod \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\" (UID: \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\") " Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.359356 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="8883a1af-5436-4fd3-bd2b-6594a0e74f9d" containerName="nova-api-api" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.359367 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-public-tls-certs\") pod \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\" (UID: \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\") " Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.359385 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="8883a1af-5436-4fd3-bd2b-6594a0e74f9d" containerName="nova-api-log" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.359396 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="548020cf-ecfe-4200-b295-af12b510bd28" containerName="nova-metadata-log" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.359407 4637 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="548020cf-ecfe-4200-b295-af12b510bd28" containerName="nova-metadata-metadata" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.359408 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-combined-ca-bundle\") pod \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\" (UID: \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\") " Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.359431 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6pn2\" (UniqueName: \"kubernetes.io/projected/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-kube-api-access-s6pn2\") pod \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\" (UID: \"8883a1af-5436-4fd3-bd2b-6594a0e74f9d\") " Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.360390 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.361529 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-logs" (OuterVolumeSpecName: "logs") pod "8883a1af-5436-4fd3-bd2b-6594a0e74f9d" (UID: "8883a1af-5436-4fd3-bd2b-6594a0e74f9d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.363421 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.371465 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-kube-api-access-s6pn2" (OuterVolumeSpecName: "kube-api-access-s6pn2") pod "8883a1af-5436-4fd3-bd2b-6594a0e74f9d" (UID: "8883a1af-5436-4fd3-bd2b-6594a0e74f9d"). InnerVolumeSpecName "kube-api-access-s6pn2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.372195 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.388081 4637 scope.go:117] "RemoveContainer" containerID="02d02f0c8778c4b352e5b5d88dee1b0d0ff2c5bbe6c79ce312c634c352cea4a0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.398753 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.398729561 podStartE2EDuration="2.398729561s" podCreationTimestamp="2025-12-01 15:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:07:22.37208912 +0000 UTC m=+1292.889797948" watchObservedRunningTime="2025-12-01 15:07:22.398729561 +0000 UTC m=+1292.916438389" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.437369 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.449204 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-config-data" (OuterVolumeSpecName: "config-data") pod "8883a1af-5436-4fd3-bd2b-6594a0e74f9d" (UID: "8883a1af-5436-4fd3-bd2b-6594a0e74f9d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.464688 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d7cca51-3d70-47ac-b1f9-ed181a1d8826-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0d7cca51-3d70-47ac-b1f9-ed181a1d8826\") " pod="openstack/nova-metadata-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.464792 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdq2g\" (UniqueName: \"kubernetes.io/projected/0d7cca51-3d70-47ac-b1f9-ed181a1d8826-kube-api-access-kdq2g\") pod \"nova-metadata-0\" (UID: \"0d7cca51-3d70-47ac-b1f9-ed181a1d8826\") " pod="openstack/nova-metadata-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.464824 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d7cca51-3d70-47ac-b1f9-ed181a1d8826-config-data\") pod \"nova-metadata-0\" (UID: \"0d7cca51-3d70-47ac-b1f9-ed181a1d8826\") " pod="openstack/nova-metadata-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.464907 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d7cca51-3d70-47ac-b1f9-ed181a1d8826-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0d7cca51-3d70-47ac-b1f9-ed181a1d8826\") " pod="openstack/nova-metadata-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.464953 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d7cca51-3d70-47ac-b1f9-ed181a1d8826-logs\") pod \"nova-metadata-0\" (UID: \"0d7cca51-3d70-47ac-b1f9-ed181a1d8826\") " pod="openstack/nova-metadata-0" Dec 01 15:07:22 crc 
kubenswrapper[4637]: I1201 15:07:22.465066 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6pn2\" (UniqueName: \"kubernetes.io/projected/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-kube-api-access-s6pn2\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.465083 4637 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-logs\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.465092 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.474643 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8883a1af-5436-4fd3-bd2b-6594a0e74f9d" (UID: "8883a1af-5436-4fd3-bd2b-6594a0e74f9d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.480046 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8883a1af-5436-4fd3-bd2b-6594a0e74f9d" (UID: "8883a1af-5436-4fd3-bd2b-6594a0e74f9d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.501176 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8883a1af-5436-4fd3-bd2b-6594a0e74f9d" (UID: "8883a1af-5436-4fd3-bd2b-6594a0e74f9d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.566751 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d7cca51-3d70-47ac-b1f9-ed181a1d8826-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0d7cca51-3d70-47ac-b1f9-ed181a1d8826\") " pod="openstack/nova-metadata-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.566859 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdq2g\" (UniqueName: \"kubernetes.io/projected/0d7cca51-3d70-47ac-b1f9-ed181a1d8826-kube-api-access-kdq2g\") pod \"nova-metadata-0\" (UID: \"0d7cca51-3d70-47ac-b1f9-ed181a1d8826\") " pod="openstack/nova-metadata-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.566892 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d7cca51-3d70-47ac-b1f9-ed181a1d8826-config-data\") pod \"nova-metadata-0\" (UID: \"0d7cca51-3d70-47ac-b1f9-ed181a1d8826\") " pod="openstack/nova-metadata-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.566964 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d7cca51-3d70-47ac-b1f9-ed181a1d8826-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0d7cca51-3d70-47ac-b1f9-ed181a1d8826\") " pod="openstack/nova-metadata-0" Dec 01 15:07:22 crc 
kubenswrapper[4637]: I1201 15:07:22.566985 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d7cca51-3d70-47ac-b1f9-ed181a1d8826-logs\") pod \"nova-metadata-0\" (UID: \"0d7cca51-3d70-47ac-b1f9-ed181a1d8826\") " pod="openstack/nova-metadata-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.567036 4637 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.567046 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.567056 4637 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8883a1af-5436-4fd3-bd2b-6594a0e74f9d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.567540 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d7cca51-3d70-47ac-b1f9-ed181a1d8826-logs\") pod \"nova-metadata-0\" (UID: \"0d7cca51-3d70-47ac-b1f9-ed181a1d8826\") " pod="openstack/nova-metadata-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.571613 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d7cca51-3d70-47ac-b1f9-ed181a1d8826-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0d7cca51-3d70-47ac-b1f9-ed181a1d8826\") " pod="openstack/nova-metadata-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.573376 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/0d7cca51-3d70-47ac-b1f9-ed181a1d8826-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0d7cca51-3d70-47ac-b1f9-ed181a1d8826\") " pod="openstack/nova-metadata-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.581664 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d7cca51-3d70-47ac-b1f9-ed181a1d8826-config-data\") pod \"nova-metadata-0\" (UID: \"0d7cca51-3d70-47ac-b1f9-ed181a1d8826\") " pod="openstack/nova-metadata-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.584062 4637 scope.go:117] "RemoveContainer" containerID="4f106d4801b2a6a15e117af79d94cab7a9897136f4abec9ab227a7919af13217" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.584965 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdq2g\" (UniqueName: \"kubernetes.io/projected/0d7cca51-3d70-47ac-b1f9-ed181a1d8826-kube-api-access-kdq2g\") pod \"nova-metadata-0\" (UID: \"0d7cca51-3d70-47ac-b1f9-ed181a1d8826\") " pod="openstack/nova-metadata-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.677464 4637 scope.go:117] "RemoveContainer" containerID="215ea9cbd565f71fb5de5f7f075ea7201086415d7900e24d4ef6960460a919a9" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.706361 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.714107 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.717307 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.741990 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.743810 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.748386 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.748609 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.748732 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.755086 4637 scope.go:117] "RemoveContainer" containerID="4f106d4801b2a6a15e117af79d94cab7a9897136f4abec9ab227a7919af13217" Dec 01 15:07:22 crc kubenswrapper[4637]: E1201 15:07:22.758145 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f106d4801b2a6a15e117af79d94cab7a9897136f4abec9ab227a7919af13217\": container with ID starting with 4f106d4801b2a6a15e117af79d94cab7a9897136f4abec9ab227a7919af13217 not found: ID does not exist" containerID="4f106d4801b2a6a15e117af79d94cab7a9897136f4abec9ab227a7919af13217" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.758180 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f106d4801b2a6a15e117af79d94cab7a9897136f4abec9ab227a7919af13217"} err="failed to get container status \"4f106d4801b2a6a15e117af79d94cab7a9897136f4abec9ab227a7919af13217\": rpc error: code = NotFound desc = could not find container \"4f106d4801b2a6a15e117af79d94cab7a9897136f4abec9ab227a7919af13217\": container with ID starting with 4f106d4801b2a6a15e117af79d94cab7a9897136f4abec9ab227a7919af13217 not found: ID does not exist" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.758204 4637 scope.go:117] "RemoveContainer" containerID="215ea9cbd565f71fb5de5f7f075ea7201086415d7900e24d4ef6960460a919a9" Dec 01 15:07:22 crc 
kubenswrapper[4637]: I1201 15:07:22.759716 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 01 15:07:22 crc kubenswrapper[4637]: E1201 15:07:22.762403 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"215ea9cbd565f71fb5de5f7f075ea7201086415d7900e24d4ef6960460a919a9\": container with ID starting with 215ea9cbd565f71fb5de5f7f075ea7201086415d7900e24d4ef6960460a919a9 not found: ID does not exist" containerID="215ea9cbd565f71fb5de5f7f075ea7201086415d7900e24d4ef6960460a919a9" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.762433 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"215ea9cbd565f71fb5de5f7f075ea7201086415d7900e24d4ef6960460a919a9"} err="failed to get container status \"215ea9cbd565f71fb5de5f7f075ea7201086415d7900e24d4ef6960460a919a9\": rpc error: code = NotFound desc = could not find container \"215ea9cbd565f71fb5de5f7f075ea7201086415d7900e24d4ef6960460a919a9\": container with ID starting with 215ea9cbd565f71fb5de5f7f075ea7201086415d7900e24d4ef6960460a919a9 not found: ID does not exist" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.878612 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqrjt\" (UniqueName: \"kubernetes.io/projected/1f8bf954-7268-4bcf-b75b-d4d4bfa26e11-kube-api-access-zqrjt\") pod \"nova-api-0\" (UID: \"1f8bf954-7268-4bcf-b75b-d4d4bfa26e11\") " pod="openstack/nova-api-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.879375 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8bf954-7268-4bcf-b75b-d4d4bfa26e11-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1f8bf954-7268-4bcf-b75b-d4d4bfa26e11\") " pod="openstack/nova-api-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 
15:07:22.879578 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8bf954-7268-4bcf-b75b-d4d4bfa26e11-config-data\") pod \"nova-api-0\" (UID: \"1f8bf954-7268-4bcf-b75b-d4d4bfa26e11\") " pod="openstack/nova-api-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.879736 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8bf954-7268-4bcf-b75b-d4d4bfa26e11-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1f8bf954-7268-4bcf-b75b-d4d4bfa26e11\") " pod="openstack/nova-api-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.879944 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8bf954-7268-4bcf-b75b-d4d4bfa26e11-public-tls-certs\") pod \"nova-api-0\" (UID: \"1f8bf954-7268-4bcf-b75b-d4d4bfa26e11\") " pod="openstack/nova-api-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.880122 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f8bf954-7268-4bcf-b75b-d4d4bfa26e11-logs\") pod \"nova-api-0\" (UID: \"1f8bf954-7268-4bcf-b75b-d4d4bfa26e11\") " pod="openstack/nova-api-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.984484 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8bf954-7268-4bcf-b75b-d4d4bfa26e11-public-tls-certs\") pod \"nova-api-0\" (UID: \"1f8bf954-7268-4bcf-b75b-d4d4bfa26e11\") " pod="openstack/nova-api-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.984661 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/1f8bf954-7268-4bcf-b75b-d4d4bfa26e11-logs\") pod \"nova-api-0\" (UID: \"1f8bf954-7268-4bcf-b75b-d4d4bfa26e11\") " pod="openstack/nova-api-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.984776 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqrjt\" (UniqueName: \"kubernetes.io/projected/1f8bf954-7268-4bcf-b75b-d4d4bfa26e11-kube-api-access-zqrjt\") pod \"nova-api-0\" (UID: \"1f8bf954-7268-4bcf-b75b-d4d4bfa26e11\") " pod="openstack/nova-api-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.984881 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8bf954-7268-4bcf-b75b-d4d4bfa26e11-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1f8bf954-7268-4bcf-b75b-d4d4bfa26e11\") " pod="openstack/nova-api-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.985053 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8bf954-7268-4bcf-b75b-d4d4bfa26e11-config-data\") pod \"nova-api-0\" (UID: \"1f8bf954-7268-4bcf-b75b-d4d4bfa26e11\") " pod="openstack/nova-api-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.985097 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8bf954-7268-4bcf-b75b-d4d4bfa26e11-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1f8bf954-7268-4bcf-b75b-d4d4bfa26e11\") " pod="openstack/nova-api-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.993338 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8bf954-7268-4bcf-b75b-d4d4bfa26e11-config-data\") pod \"nova-api-0\" (UID: \"1f8bf954-7268-4bcf-b75b-d4d4bfa26e11\") " pod="openstack/nova-api-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.993387 4637 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8bf954-7268-4bcf-b75b-d4d4bfa26e11-public-tls-certs\") pod \"nova-api-0\" (UID: \"1f8bf954-7268-4bcf-b75b-d4d4bfa26e11\") " pod="openstack/nova-api-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.997168 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8bf954-7268-4bcf-b75b-d4d4bfa26e11-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1f8bf954-7268-4bcf-b75b-d4d4bfa26e11\") " pod="openstack/nova-api-0" Dec 01 15:07:22 crc kubenswrapper[4637]: I1201 15:07:22.997513 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f8bf954-7268-4bcf-b75b-d4d4bfa26e11-logs\") pod \"nova-api-0\" (UID: \"1f8bf954-7268-4bcf-b75b-d4d4bfa26e11\") " pod="openstack/nova-api-0" Dec 01 15:07:23 crc kubenswrapper[4637]: I1201 15:07:23.000387 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8bf954-7268-4bcf-b75b-d4d4bfa26e11-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1f8bf954-7268-4bcf-b75b-d4d4bfa26e11\") " pod="openstack/nova-api-0" Dec 01 15:07:23 crc kubenswrapper[4637]: I1201 15:07:23.010464 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqrjt\" (UniqueName: \"kubernetes.io/projected/1f8bf954-7268-4bcf-b75b-d4d4bfa26e11-kube-api-access-zqrjt\") pod \"nova-api-0\" (UID: \"1f8bf954-7268-4bcf-b75b-d4d4bfa26e11\") " pod="openstack/nova-api-0" Dec 01 15:07:23 crc kubenswrapper[4637]: I1201 15:07:23.077381 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 01 15:07:23 crc kubenswrapper[4637]: I1201 15:07:23.272726 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 15:07:23 crc kubenswrapper[4637]: I1201 15:07:23.368814 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0d7cca51-3d70-47ac-b1f9-ed181a1d8826","Type":"ContainerStarted","Data":"f976960d2930fdc5fe4ca7861191bf7dcd4b6c64f152ef93d540c2b1e145f7c5"} Dec 01 15:07:23 crc kubenswrapper[4637]: I1201 15:07:23.581592 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 01 15:07:23 crc kubenswrapper[4637]: I1201 15:07:23.786046 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="548020cf-ecfe-4200-b295-af12b510bd28" path="/var/lib/kubelet/pods/548020cf-ecfe-4200-b295-af12b510bd28/volumes" Dec 01 15:07:23 crc kubenswrapper[4637]: I1201 15:07:23.787464 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8883a1af-5436-4fd3-bd2b-6594a0e74f9d" path="/var/lib/kubelet/pods/8883a1af-5436-4fd3-bd2b-6594a0e74f9d/volumes" Dec 01 15:07:24 crc kubenswrapper[4637]: I1201 15:07:24.378331 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1f8bf954-7268-4bcf-b75b-d4d4bfa26e11","Type":"ContainerStarted","Data":"56e044ebd73c56942d741b89104942773e71322888c8a8cf376be0e4f08036c0"} Dec 01 15:07:24 crc kubenswrapper[4637]: I1201 15:07:24.378400 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1f8bf954-7268-4bcf-b75b-d4d4bfa26e11","Type":"ContainerStarted","Data":"57d1a8e8e491db98b1eceb6e290d47808bb31a780dd00a64c61ae0d4969946c1"} Dec 01 15:07:24 crc kubenswrapper[4637]: I1201 15:07:24.378433 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"1f8bf954-7268-4bcf-b75b-d4d4bfa26e11","Type":"ContainerStarted","Data":"ca2217ce2b9b755e988cfc80c4fdaa950008c3d2f01a4f5d2147fb24f69191c6"} Dec 01 15:07:24 crc kubenswrapper[4637]: I1201 15:07:24.383882 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0d7cca51-3d70-47ac-b1f9-ed181a1d8826","Type":"ContainerStarted","Data":"8ceaaa18ff3d7db07ad22791da6acb322d6a3967ff084267068339737f6f9e48"} Dec 01 15:07:24 crc kubenswrapper[4637]: I1201 15:07:24.384010 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0d7cca51-3d70-47ac-b1f9-ed181a1d8826","Type":"ContainerStarted","Data":"6ffee7f8299d27bce636d68517c0cdc4e12b2f2abd56132ee5434f8f3ec9bd5e"} Dec 01 15:07:24 crc kubenswrapper[4637]: I1201 15:07:24.419360 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.419334633 podStartE2EDuration="2.419334633s" podCreationTimestamp="2025-12-01 15:07:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:07:24.399543558 +0000 UTC m=+1294.917252386" watchObservedRunningTime="2025-12-01 15:07:24.419334633 +0000 UTC m=+1294.937043461" Dec 01 15:07:24 crc kubenswrapper[4637]: I1201 15:07:24.455588 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.455568874 podStartE2EDuration="2.455568874s" podCreationTimestamp="2025-12-01 15:07:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:07:24.437409953 +0000 UTC m=+1294.955118781" watchObservedRunningTime="2025-12-01 15:07:24.455568874 +0000 UTC m=+1294.973277702" Dec 01 15:07:25 crc kubenswrapper[4637]: I1201 15:07:25.647757 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-scheduler-0" Dec 01 15:07:27 crc kubenswrapper[4637]: I1201 15:07:27.718414 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 01 15:07:27 crc kubenswrapper[4637]: I1201 15:07:27.718774 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 01 15:07:30 crc kubenswrapper[4637]: I1201 15:07:30.647592 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 01 15:07:30 crc kubenswrapper[4637]: I1201 15:07:30.674762 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 01 15:07:31 crc kubenswrapper[4637]: I1201 15:07:31.499322 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 01 15:07:32 crc kubenswrapper[4637]: I1201 15:07:32.718474 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 01 15:07:32 crc kubenswrapper[4637]: I1201 15:07:32.718853 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 01 15:07:33 crc kubenswrapper[4637]: I1201 15:07:33.078527 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 01 15:07:33 crc kubenswrapper[4637]: I1201 15:07:33.080192 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 01 15:07:33 crc kubenswrapper[4637]: I1201 15:07:33.730240 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0d7cca51-3d70-47ac-b1f9-ed181a1d8826" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 15:07:33 crc kubenswrapper[4637]: I1201 15:07:33.730271 4637 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0d7cca51-3d70-47ac-b1f9-ed181a1d8826" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 15:07:34 crc kubenswrapper[4637]: I1201 15:07:34.095144 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1f8bf954-7268-4bcf-b75b-d4d4bfa26e11" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 15:07:34 crc kubenswrapper[4637]: I1201 15:07:34.095145 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1f8bf954-7268-4bcf-b75b-d4d4bfa26e11" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 15:07:42 crc kubenswrapper[4637]: I1201 15:07:42.725201 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 01 15:07:42 crc kubenswrapper[4637]: I1201 15:07:42.726359 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 01 15:07:42 crc kubenswrapper[4637]: I1201 15:07:42.733002 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 01 15:07:43 crc kubenswrapper[4637]: I1201 15:07:43.087323 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 01 15:07:43 crc kubenswrapper[4637]: I1201 15:07:43.088298 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 01 15:07:43 crc kubenswrapper[4637]: I1201 15:07:43.094816 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/nova-api-0" Dec 01 15:07:43 crc kubenswrapper[4637]: I1201 15:07:43.101389 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 01 15:07:43 crc kubenswrapper[4637]: I1201 15:07:43.578980 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 01 15:07:43 crc kubenswrapper[4637]: I1201 15:07:43.583442 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 01 15:07:43 crc kubenswrapper[4637]: I1201 15:07:43.588163 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 01 15:07:44 crc kubenswrapper[4637]: I1201 15:07:44.711267 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 01 15:07:45 crc kubenswrapper[4637]: I1201 15:07:45.614063 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:07:45 crc kubenswrapper[4637]: I1201 15:07:45.614569 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:07:54 crc kubenswrapper[4637]: I1201 15:07:54.305904 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 15:07:55 crc kubenswrapper[4637]: I1201 15:07:55.107846 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 15:07:59 crc kubenswrapper[4637]: I1201 15:07:59.937688 4637 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="8eeaa55a-2c35-480c-baec-134ef1158e66" containerName="rabbitmq" containerID="cri-o://ab164b2fba6dd21a8e04001212e42fc0a476ea075a162b09badea48b34d9ead1" gracePeriod=604795 Dec 01 15:08:01 crc kubenswrapper[4637]: I1201 15:08:01.005941 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="bee806ff-8bec-49d0-a47f-bfd8edbb36fb" containerName="rabbitmq" containerID="cri-o://ec423fcce8896ba5554459a5c4618d8fd2a6994e18d0be9e0b13a2d9ce8058e5" gracePeriod=604795 Dec 01 15:08:01 crc kubenswrapper[4637]: I1201 15:08:01.258152 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="bee806ff-8bec-49d0-a47f-bfd8edbb36fb" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.96:5671: connect: connection refused" Dec 01 15:08:01 crc kubenswrapper[4637]: I1201 15:08:01.960213 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="8eeaa55a-2c35-480c-baec-134ef1158e66" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.576751 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.599158 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8eeaa55a-2c35-480c-baec-134ef1158e66-erlang-cookie-secret\") pod \"8eeaa55a-2c35-480c-baec-134ef1158e66\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.599256 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"8eeaa55a-2c35-480c-baec-134ef1158e66\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.599296 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8eeaa55a-2c35-480c-baec-134ef1158e66-rabbitmq-tls\") pod \"8eeaa55a-2c35-480c-baec-134ef1158e66\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.599436 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8eeaa55a-2c35-480c-baec-134ef1158e66-rabbitmq-erlang-cookie\") pod \"8eeaa55a-2c35-480c-baec-134ef1158e66\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.599569 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8eeaa55a-2c35-480c-baec-134ef1158e66-rabbitmq-plugins\") pod \"8eeaa55a-2c35-480c-baec-134ef1158e66\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.599630 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/8eeaa55a-2c35-480c-baec-134ef1158e66-config-data\") pod \"8eeaa55a-2c35-480c-baec-134ef1158e66\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.599682 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8eeaa55a-2c35-480c-baec-134ef1158e66-plugins-conf\") pod \"8eeaa55a-2c35-480c-baec-134ef1158e66\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.600003 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8eeaa55a-2c35-480c-baec-134ef1158e66-pod-info\") pod \"8eeaa55a-2c35-480c-baec-134ef1158e66\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.600033 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8eeaa55a-2c35-480c-baec-134ef1158e66-server-conf\") pod \"8eeaa55a-2c35-480c-baec-134ef1158e66\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.600097 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8eeaa55a-2c35-480c-baec-134ef1158e66-rabbitmq-confd\") pod \"8eeaa55a-2c35-480c-baec-134ef1158e66\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.600147 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgmpl\" (UniqueName: \"kubernetes.io/projected/8eeaa55a-2c35-480c-baec-134ef1158e66-kube-api-access-rgmpl\") pod \"8eeaa55a-2c35-480c-baec-134ef1158e66\" (UID: \"8eeaa55a-2c35-480c-baec-134ef1158e66\") " Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 
15:08:06.600554 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eeaa55a-2c35-480c-baec-134ef1158e66-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "8eeaa55a-2c35-480c-baec-134ef1158e66" (UID: "8eeaa55a-2c35-480c-baec-134ef1158e66"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.600755 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eeaa55a-2c35-480c-baec-134ef1158e66-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "8eeaa55a-2c35-480c-baec-134ef1158e66" (UID: "8eeaa55a-2c35-480c-baec-134ef1158e66"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.601208 4637 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8eeaa55a-2c35-480c-baec-134ef1158e66-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.601232 4637 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8eeaa55a-2c35-480c-baec-134ef1158e66-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.601384 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eeaa55a-2c35-480c-baec-134ef1158e66-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "8eeaa55a-2c35-480c-baec-134ef1158e66" (UID: "8eeaa55a-2c35-480c-baec-134ef1158e66"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.609880 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/8eeaa55a-2c35-480c-baec-134ef1158e66-pod-info" (OuterVolumeSpecName: "pod-info") pod "8eeaa55a-2c35-480c-baec-134ef1158e66" (UID: "8eeaa55a-2c35-480c-baec-134ef1158e66"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.619013 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "8eeaa55a-2c35-480c-baec-134ef1158e66" (UID: "8eeaa55a-2c35-480c-baec-134ef1158e66"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.619206 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eeaa55a-2c35-480c-baec-134ef1158e66-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "8eeaa55a-2c35-480c-baec-134ef1158e66" (UID: "8eeaa55a-2c35-480c-baec-134ef1158e66"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.619347 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eeaa55a-2c35-480c-baec-134ef1158e66-kube-api-access-rgmpl" (OuterVolumeSpecName: "kube-api-access-rgmpl") pod "8eeaa55a-2c35-480c-baec-134ef1158e66" (UID: "8eeaa55a-2c35-480c-baec-134ef1158e66"). InnerVolumeSpecName "kube-api-access-rgmpl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.636131 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eeaa55a-2c35-480c-baec-134ef1158e66-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "8eeaa55a-2c35-480c-baec-134ef1158e66" (UID: "8eeaa55a-2c35-480c-baec-134ef1158e66"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.664413 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eeaa55a-2c35-480c-baec-134ef1158e66-config-data" (OuterVolumeSpecName: "config-data") pod "8eeaa55a-2c35-480c-baec-134ef1158e66" (UID: "8eeaa55a-2c35-480c-baec-134ef1158e66"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.706260 4637 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8eeaa55a-2c35-480c-baec-134ef1158e66-pod-info\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.706302 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgmpl\" (UniqueName: \"kubernetes.io/projected/8eeaa55a-2c35-480c-baec-134ef1158e66-kube-api-access-rgmpl\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.706315 4637 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8eeaa55a-2c35-480c-baec-134ef1158e66-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.706344 4637 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Dec 
01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.706354 4637 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8eeaa55a-2c35-480c-baec-134ef1158e66-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.706362 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8eeaa55a-2c35-480c-baec-134ef1158e66-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.706370 4637 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8eeaa55a-2c35-480c-baec-134ef1158e66-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.715781 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eeaa55a-2c35-480c-baec-134ef1158e66-server-conf" (OuterVolumeSpecName: "server-conf") pod "8eeaa55a-2c35-480c-baec-134ef1158e66" (UID: "8eeaa55a-2c35-480c-baec-134ef1158e66"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.764562 4637 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.808123 4637 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.808159 4637 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8eeaa55a-2c35-480c-baec-134ef1158e66-server-conf\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.906378 4637 generic.go:334] "Generic (PLEG): container finished" podID="8eeaa55a-2c35-480c-baec-134ef1158e66" containerID="ab164b2fba6dd21a8e04001212e42fc0a476ea075a162b09badea48b34d9ead1" exitCode=0 Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.906417 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8eeaa55a-2c35-480c-baec-134ef1158e66","Type":"ContainerDied","Data":"ab164b2fba6dd21a8e04001212e42fc0a476ea075a162b09badea48b34d9ead1"} Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.906445 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8eeaa55a-2c35-480c-baec-134ef1158e66","Type":"ContainerDied","Data":"9dc69a07152fc7b789ce5c18528ee0c387da8530c53f963f9fa9c1e1e7b733ee"} Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.906462 4637 scope.go:117] "RemoveContainer" containerID="ab164b2fba6dd21a8e04001212e42fc0a476ea075a162b09badea48b34d9ead1" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.906628 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.917307 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eeaa55a-2c35-480c-baec-134ef1158e66-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8eeaa55a-2c35-480c-baec-134ef1158e66" (UID: "8eeaa55a-2c35-480c-baec-134ef1158e66"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.919678 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-szjk2"] Dec 01 15:08:06 crc kubenswrapper[4637]: E1201 15:08:06.920237 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eeaa55a-2c35-480c-baec-134ef1158e66" containerName="rabbitmq" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.920317 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eeaa55a-2c35-480c-baec-134ef1158e66" containerName="rabbitmq" Dec 01 15:08:06 crc kubenswrapper[4637]: E1201 15:08:06.920388 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eeaa55a-2c35-480c-baec-134ef1158e66" containerName="setup-container" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.920464 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eeaa55a-2c35-480c-baec-134ef1158e66" containerName="setup-container" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.920698 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eeaa55a-2c35-480c-baec-134ef1158e66" containerName="rabbitmq" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.935687 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-szjk2" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.947381 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 01 15:08:06 crc kubenswrapper[4637]: I1201 15:08:06.956504 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-szjk2"] Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.018314 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-ovsdbserver-nb\") pod \"dnsmasq-dns-5576978c7c-szjk2\" (UID: \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\") " pod="openstack/dnsmasq-dns-5576978c7c-szjk2" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.018671 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-ovsdbserver-sb\") pod \"dnsmasq-dns-5576978c7c-szjk2\" (UID: \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\") " pod="openstack/dnsmasq-dns-5576978c7c-szjk2" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.018796 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-dns-svc\") pod \"dnsmasq-dns-5576978c7c-szjk2\" (UID: \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\") " pod="openstack/dnsmasq-dns-5576978c7c-szjk2" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.018944 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-config\") pod \"dnsmasq-dns-5576978c7c-szjk2\" (UID: \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\") " 
pod="openstack/dnsmasq-dns-5576978c7c-szjk2" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.019044 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-openstack-edpm-ipam\") pod \"dnsmasq-dns-5576978c7c-szjk2\" (UID: \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\") " pod="openstack/dnsmasq-dns-5576978c7c-szjk2" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.019141 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7mdq\" (UniqueName: \"kubernetes.io/projected/0e8faf04-5caa-4ba4-8100-866d7c7182f8-kube-api-access-k7mdq\") pod \"dnsmasq-dns-5576978c7c-szjk2\" (UID: \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\") " pod="openstack/dnsmasq-dns-5576978c7c-szjk2" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.019252 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-dns-swift-storage-0\") pod \"dnsmasq-dns-5576978c7c-szjk2\" (UID: \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\") " pod="openstack/dnsmasq-dns-5576978c7c-szjk2" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.019452 4637 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8eeaa55a-2c35-480c-baec-134ef1158e66-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.084183 4637 scope.go:117] "RemoveContainer" containerID="8d67755d90b536e7b72345ed9bed90290fe5733e7d1987be7c0301d882841ddf" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.124109 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-dns-svc\") pod 
\"dnsmasq-dns-5576978c7c-szjk2\" (UID: \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\") " pod="openstack/dnsmasq-dns-5576978c7c-szjk2" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.124205 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-config\") pod \"dnsmasq-dns-5576978c7c-szjk2\" (UID: \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\") " pod="openstack/dnsmasq-dns-5576978c7c-szjk2" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.124425 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-openstack-edpm-ipam\") pod \"dnsmasq-dns-5576978c7c-szjk2\" (UID: \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\") " pod="openstack/dnsmasq-dns-5576978c7c-szjk2" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.124455 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7mdq\" (UniqueName: \"kubernetes.io/projected/0e8faf04-5caa-4ba4-8100-866d7c7182f8-kube-api-access-k7mdq\") pod \"dnsmasq-dns-5576978c7c-szjk2\" (UID: \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\") " pod="openstack/dnsmasq-dns-5576978c7c-szjk2" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.124478 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-dns-swift-storage-0\") pod \"dnsmasq-dns-5576978c7c-szjk2\" (UID: \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\") " pod="openstack/dnsmasq-dns-5576978c7c-szjk2" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.124543 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-ovsdbserver-nb\") pod \"dnsmasq-dns-5576978c7c-szjk2\" 
(UID: \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\") " pod="openstack/dnsmasq-dns-5576978c7c-szjk2" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.124947 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-ovsdbserver-sb\") pod \"dnsmasq-dns-5576978c7c-szjk2\" (UID: \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\") " pod="openstack/dnsmasq-dns-5576978c7c-szjk2" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.125271 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-dns-svc\") pod \"dnsmasq-dns-5576978c7c-szjk2\" (UID: \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\") " pod="openstack/dnsmasq-dns-5576978c7c-szjk2" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.126144 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-ovsdbserver-sb\") pod \"dnsmasq-dns-5576978c7c-szjk2\" (UID: \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\") " pod="openstack/dnsmasq-dns-5576978c7c-szjk2" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.126446 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-openstack-edpm-ipam\") pod \"dnsmasq-dns-5576978c7c-szjk2\" (UID: \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\") " pod="openstack/dnsmasq-dns-5576978c7c-szjk2" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.126501 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-ovsdbserver-nb\") pod \"dnsmasq-dns-5576978c7c-szjk2\" (UID: \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\") " 
pod="openstack/dnsmasq-dns-5576978c7c-szjk2" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.126769 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-config\") pod \"dnsmasq-dns-5576978c7c-szjk2\" (UID: \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\") " pod="openstack/dnsmasq-dns-5576978c7c-szjk2" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.127755 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-dns-swift-storage-0\") pod \"dnsmasq-dns-5576978c7c-szjk2\" (UID: \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\") " pod="openstack/dnsmasq-dns-5576978c7c-szjk2" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.132414 4637 scope.go:117] "RemoveContainer" containerID="ab164b2fba6dd21a8e04001212e42fc0a476ea075a162b09badea48b34d9ead1" Dec 01 15:08:07 crc kubenswrapper[4637]: E1201 15:08:07.138075 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab164b2fba6dd21a8e04001212e42fc0a476ea075a162b09badea48b34d9ead1\": container with ID starting with ab164b2fba6dd21a8e04001212e42fc0a476ea075a162b09badea48b34d9ead1 not found: ID does not exist" containerID="ab164b2fba6dd21a8e04001212e42fc0a476ea075a162b09badea48b34d9ead1" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.138130 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab164b2fba6dd21a8e04001212e42fc0a476ea075a162b09badea48b34d9ead1"} err="failed to get container status \"ab164b2fba6dd21a8e04001212e42fc0a476ea075a162b09badea48b34d9ead1\": rpc error: code = NotFound desc = could not find container \"ab164b2fba6dd21a8e04001212e42fc0a476ea075a162b09badea48b34d9ead1\": container with ID starting with 
ab164b2fba6dd21a8e04001212e42fc0a476ea075a162b09badea48b34d9ead1 not found: ID does not exist" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.138160 4637 scope.go:117] "RemoveContainer" containerID="8d67755d90b536e7b72345ed9bed90290fe5733e7d1987be7c0301d882841ddf" Dec 01 15:08:07 crc kubenswrapper[4637]: E1201 15:08:07.141043 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d67755d90b536e7b72345ed9bed90290fe5733e7d1987be7c0301d882841ddf\": container with ID starting with 8d67755d90b536e7b72345ed9bed90290fe5733e7d1987be7c0301d882841ddf not found: ID does not exist" containerID="8d67755d90b536e7b72345ed9bed90290fe5733e7d1987be7c0301d882841ddf" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.141096 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d67755d90b536e7b72345ed9bed90290fe5733e7d1987be7c0301d882841ddf"} err="failed to get container status \"8d67755d90b536e7b72345ed9bed90290fe5733e7d1987be7c0301d882841ddf\": rpc error: code = NotFound desc = could not find container \"8d67755d90b536e7b72345ed9bed90290fe5733e7d1987be7c0301d882841ddf\": container with ID starting with 8d67755d90b536e7b72345ed9bed90290fe5733e7d1987be7c0301d882841ddf not found: ID does not exist" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.156299 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7mdq\" (UniqueName: \"kubernetes.io/projected/0e8faf04-5caa-4ba4-8100-866d7c7182f8-kube-api-access-k7mdq\") pod \"dnsmasq-dns-5576978c7c-szjk2\" (UID: \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\") " pod="openstack/dnsmasq-dns-5576978c7c-szjk2" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.280319 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-szjk2" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.333039 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.365298 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.391769 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.393759 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.404735 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.405828 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.405995 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.406153 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-w98tp" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.406312 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.406458 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.407343 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.490256 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/rabbitmq-server-0"] Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.536716 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/966262a4-bd2b-40fd-b052-ce2bd68485b5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.537353 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/966262a4-bd2b-40fd-b052-ce2bd68485b5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.537382 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/966262a4-bd2b-40fd-b052-ce2bd68485b5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.537435 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/966262a4-bd2b-40fd-b052-ce2bd68485b5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.537483 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/966262a4-bd2b-40fd-b052-ce2bd68485b5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc 
kubenswrapper[4637]: I1201 15:08:07.537525 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xvm6\" (UniqueName: \"kubernetes.io/projected/966262a4-bd2b-40fd-b052-ce2bd68485b5-kube-api-access-4xvm6\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.537555 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.537579 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/966262a4-bd2b-40fd-b052-ce2bd68485b5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.537609 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/966262a4-bd2b-40fd-b052-ce2bd68485b5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.537650 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/966262a4-bd2b-40fd-b052-ce2bd68485b5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.537671 4637 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/966262a4-bd2b-40fd-b052-ce2bd68485b5-config-data\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.642278 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/966262a4-bd2b-40fd-b052-ce2bd68485b5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.642342 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/966262a4-bd2b-40fd-b052-ce2bd68485b5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.642385 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xvm6\" (UniqueName: \"kubernetes.io/projected/966262a4-bd2b-40fd-b052-ce2bd68485b5-kube-api-access-4xvm6\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.642417 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.642442 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/966262a4-bd2b-40fd-b052-ce2bd68485b5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.642466 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/966262a4-bd2b-40fd-b052-ce2bd68485b5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.642509 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/966262a4-bd2b-40fd-b052-ce2bd68485b5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.642532 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/966262a4-bd2b-40fd-b052-ce2bd68485b5-config-data\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.642561 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/966262a4-bd2b-40fd-b052-ce2bd68485b5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.642602 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/966262a4-bd2b-40fd-b052-ce2bd68485b5-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.642618 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/966262a4-bd2b-40fd-b052-ce2bd68485b5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.643571 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/966262a4-bd2b-40fd-b052-ce2bd68485b5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.644859 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/966262a4-bd2b-40fd-b052-ce2bd68485b5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.647234 4637 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.652955 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/966262a4-bd2b-40fd-b052-ce2bd68485b5-config-data\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.653250 4637 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/966262a4-bd2b-40fd-b052-ce2bd68485b5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.666760 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/966262a4-bd2b-40fd-b052-ce2bd68485b5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.683483 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/966262a4-bd2b-40fd-b052-ce2bd68485b5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.725948 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/966262a4-bd2b-40fd-b052-ce2bd68485b5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.753023 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xvm6\" (UniqueName: \"kubernetes.io/projected/966262a4-bd2b-40fd-b052-ce2bd68485b5-kube-api-access-4xvm6\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.753474 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/966262a4-bd2b-40fd-b052-ce2bd68485b5-erlang-cookie-secret\") pod 
\"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.765181 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/966262a4-bd2b-40fd-b052-ce2bd68485b5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.813184 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eeaa55a-2c35-480c-baec-134ef1158e66" path="/var/lib/kubelet/pods/8eeaa55a-2c35-480c-baec-134ef1158e66/volumes" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.834760 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"966262a4-bd2b-40fd-b052-ce2bd68485b5\") " pod="openstack/rabbitmq-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.935120 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.948053 4637 generic.go:334] "Generic (PLEG): container finished" podID="bee806ff-8bec-49d0-a47f-bfd8edbb36fb" containerID="ec423fcce8896ba5554459a5c4618d8fd2a6994e18d0be9e0b13a2d9ce8058e5" exitCode=0 Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.948149 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bee806ff-8bec-49d0-a47f-bfd8edbb36fb","Type":"ContainerDied","Data":"ec423fcce8896ba5554459a5c4618d8fd2a6994e18d0be9e0b13a2d9ce8058e5"} Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.948231 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bee806ff-8bec-49d0-a47f-bfd8edbb36fb","Type":"ContainerDied","Data":"657d3c1ef5c2bdbec8c3233bd82db3da47270b0ef337da438738022f29d78689"} Dec 01 15:08:07 crc kubenswrapper[4637]: I1201 15:08:07.948258 4637 scope.go:117] "RemoveContainer" containerID="ec423fcce8896ba5554459a5c4618d8fd2a6994e18d0be9e0b13a2d9ce8058e5" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.016531 4637 scope.go:117] "RemoveContainer" containerID="af0da3d91d099753d92b445ebb956eb6402c743e3a45b0e00e766e3bd331a51f" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.043458 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.067279 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-rabbitmq-plugins\") pod \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.067332 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-rabbitmq-tls\") pod \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.067365 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-pod-info\") pod \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.067449 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-plugins-conf\") pod \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.067482 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-rabbitmq-erlang-cookie\") pod \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.067566 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-erlang-cookie-secret\") pod \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.067594 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-server-conf\") pod \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.067636 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fcxg\" (UniqueName: \"kubernetes.io/projected/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-kube-api-access-2fcxg\") pod \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.067719 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-rabbitmq-confd\") pod \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.067746 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-config-data\") pod \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.067822 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\" (UID: \"bee806ff-8bec-49d0-a47f-bfd8edbb36fb\") " Dec 01 15:08:08 crc kubenswrapper[4637]: 
I1201 15:08:08.072719 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "bee806ff-8bec-49d0-a47f-bfd8edbb36fb" (UID: "bee806ff-8bec-49d0-a47f-bfd8edbb36fb"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.074393 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "bee806ff-8bec-49d0-a47f-bfd8edbb36fb" (UID: "bee806ff-8bec-49d0-a47f-bfd8edbb36fb"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.075392 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "bee806ff-8bec-49d0-a47f-bfd8edbb36fb" (UID: "bee806ff-8bec-49d0-a47f-bfd8edbb36fb"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.088328 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "bee806ff-8bec-49d0-a47f-bfd8edbb36fb" (UID: "bee806ff-8bec-49d0-a47f-bfd8edbb36fb"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.093457 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "bee806ff-8bec-49d0-a47f-bfd8edbb36fb" (UID: "bee806ff-8bec-49d0-a47f-bfd8edbb36fb"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.097546 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "bee806ff-8bec-49d0-a47f-bfd8edbb36fb" (UID: "bee806ff-8bec-49d0-a47f-bfd8edbb36fb"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.111604 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-kube-api-access-2fcxg" (OuterVolumeSpecName: "kube-api-access-2fcxg") pod "bee806ff-8bec-49d0-a47f-bfd8edbb36fb" (UID: "bee806ff-8bec-49d0-a47f-bfd8edbb36fb"). InnerVolumeSpecName "kube-api-access-2fcxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.115258 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-pod-info" (OuterVolumeSpecName: "pod-info") pod "bee806ff-8bec-49d0-a47f-bfd8edbb36fb" (UID: "bee806ff-8bec-49d0-a47f-bfd8edbb36fb"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.133450 4637 scope.go:117] "RemoveContainer" containerID="ec423fcce8896ba5554459a5c4618d8fd2a6994e18d0be9e0b13a2d9ce8058e5" Dec 01 15:08:08 crc kubenswrapper[4637]: E1201 15:08:08.137119 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec423fcce8896ba5554459a5c4618d8fd2a6994e18d0be9e0b13a2d9ce8058e5\": container with ID starting with ec423fcce8896ba5554459a5c4618d8fd2a6994e18d0be9e0b13a2d9ce8058e5 not found: ID does not exist" containerID="ec423fcce8896ba5554459a5c4618d8fd2a6994e18d0be9e0b13a2d9ce8058e5" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.137273 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec423fcce8896ba5554459a5c4618d8fd2a6994e18d0be9e0b13a2d9ce8058e5"} err="failed to get container status \"ec423fcce8896ba5554459a5c4618d8fd2a6994e18d0be9e0b13a2d9ce8058e5\": rpc error: code = NotFound desc = could not find container \"ec423fcce8896ba5554459a5c4618d8fd2a6994e18d0be9e0b13a2d9ce8058e5\": container with ID starting with ec423fcce8896ba5554459a5c4618d8fd2a6994e18d0be9e0b13a2d9ce8058e5 not found: ID does not exist" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.137389 4637 scope.go:117] "RemoveContainer" containerID="af0da3d91d099753d92b445ebb956eb6402c743e3a45b0e00e766e3bd331a51f" Dec 01 15:08:08 crc kubenswrapper[4637]: E1201 15:08:08.139907 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af0da3d91d099753d92b445ebb956eb6402c743e3a45b0e00e766e3bd331a51f\": container with ID starting with af0da3d91d099753d92b445ebb956eb6402c743e3a45b0e00e766e3bd331a51f not found: ID does not exist" containerID="af0da3d91d099753d92b445ebb956eb6402c743e3a45b0e00e766e3bd331a51f" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 
15:08:08.140055 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af0da3d91d099753d92b445ebb956eb6402c743e3a45b0e00e766e3bd331a51f"} err="failed to get container status \"af0da3d91d099753d92b445ebb956eb6402c743e3a45b0e00e766e3bd331a51f\": rpc error: code = NotFound desc = could not find container \"af0da3d91d099753d92b445ebb956eb6402c743e3a45b0e00e766e3bd331a51f\": container with ID starting with af0da3d91d099753d92b445ebb956eb6402c743e3a45b0e00e766e3bd331a51f not found: ID does not exist" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.170164 4637 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.170198 4637 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.170342 4637 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.170365 4637 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-pod-info\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.170373 4637 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.170381 4637 reconciler_common.go:293] "Volume detached for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.170389 4637 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.170398 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fcxg\" (UniqueName: \"kubernetes.io/projected/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-kube-api-access-2fcxg\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.211405 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-config-data" (OuterVolumeSpecName: "config-data") pod "bee806ff-8bec-49d0-a47f-bfd8edbb36fb" (UID: "bee806ff-8bec-49d0-a47f-bfd8edbb36fb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.218509 4637 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.283637 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.283680 4637 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.316151 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-server-conf" (OuterVolumeSpecName: "server-conf") pod "bee806ff-8bec-49d0-a47f-bfd8edbb36fb" (UID: "bee806ff-8bec-49d0-a47f-bfd8edbb36fb"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.389618 4637 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-server-conf\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.421261 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-szjk2"] Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.481269 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "bee806ff-8bec-49d0-a47f-bfd8edbb36fb" (UID: "bee806ff-8bec-49d0-a47f-bfd8edbb36fb"). 
InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.503599 4637 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bee806ff-8bec-49d0-a47f-bfd8edbb36fb-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.812516 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 15:08:08 crc kubenswrapper[4637]: W1201 15:08:08.823093 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod966262a4_bd2b_40fd_b052_ce2bd68485b5.slice/crio-12577916213c9ba5bd5eea42b84226fe2b9b77f6c3988525f7e70540d3fdddfd WatchSource:0}: Error finding container 12577916213c9ba5bd5eea42b84226fe2b9b77f6c3988525f7e70540d3fdddfd: Status 404 returned error can't find the container with id 12577916213c9ba5bd5eea42b84226fe2b9b77f6c3988525f7e70540d3fdddfd Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.973080 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.975666 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"966262a4-bd2b-40fd-b052-ce2bd68485b5","Type":"ContainerStarted","Data":"12577916213c9ba5bd5eea42b84226fe2b9b77f6c3988525f7e70540d3fdddfd"} Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.978273 4637 generic.go:334] "Generic (PLEG): container finished" podID="0e8faf04-5caa-4ba4-8100-866d7c7182f8" containerID="ad1023a7b704ba11d62b7f0d8bd7b0ec13201e392982fcfedd39a4ac771210da" exitCode=0 Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.978303 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-szjk2" event={"ID":"0e8faf04-5caa-4ba4-8100-866d7c7182f8","Type":"ContainerDied","Data":"ad1023a7b704ba11d62b7f0d8bd7b0ec13201e392982fcfedd39a4ac771210da"} Dec 01 15:08:08 crc kubenswrapper[4637]: I1201 15:08:08.978322 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-szjk2" event={"ID":"0e8faf04-5caa-4ba4-8100-866d7c7182f8","Type":"ContainerStarted","Data":"a0cf83580fcc432b2f05292b532b705344ef05a1b84ed6769e13da990bb96fed"} Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.043859 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.062794 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.082061 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 15:08:09 crc kubenswrapper[4637]: E1201 15:08:09.082513 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bee806ff-8bec-49d0-a47f-bfd8edbb36fb" containerName="setup-container" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.082534 4637 
state_mem.go:107] "Deleted CPUSet assignment" podUID="bee806ff-8bec-49d0-a47f-bfd8edbb36fb" containerName="setup-container" Dec 01 15:08:09 crc kubenswrapper[4637]: E1201 15:08:09.082561 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bee806ff-8bec-49d0-a47f-bfd8edbb36fb" containerName="rabbitmq" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.082566 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="bee806ff-8bec-49d0-a47f-bfd8edbb36fb" containerName="rabbitmq" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.082887 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="bee806ff-8bec-49d0-a47f-bfd8edbb36fb" containerName="rabbitmq" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.090421 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.093615 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.093995 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.094277 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.094381 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.094549 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-gh8nl" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.094494 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.098489 4637 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.120082 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.218011 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64730b89-aa49-4741-b050-c283d98626c9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.218310 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/64730b89-aa49-4741-b050-c283d98626c9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.218442 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/64730b89-aa49-4741-b050-c283d98626c9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.218569 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/64730b89-aa49-4741-b050-c283d98626c9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.218651 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/64730b89-aa49-4741-b050-c283d98626c9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.218723 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/64730b89-aa49-4741-b050-c283d98626c9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.218808 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln2r8\" (UniqueName: \"kubernetes.io/projected/64730b89-aa49-4741-b050-c283d98626c9-kube-api-access-ln2r8\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.218886 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/64730b89-aa49-4741-b050-c283d98626c9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.219147 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/64730b89-aa49-4741-b050-c283d98626c9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.219247 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/64730b89-aa49-4741-b050-c283d98626c9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.219348 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.322587 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/64730b89-aa49-4741-b050-c283d98626c9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.322743 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/64730b89-aa49-4741-b050-c283d98626c9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.322844 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/64730b89-aa49-4741-b050-c283d98626c9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.322870 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/64730b89-aa49-4741-b050-c283d98626c9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.323386 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/64730b89-aa49-4741-b050-c283d98626c9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.323438 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln2r8\" (UniqueName: \"kubernetes.io/projected/64730b89-aa49-4741-b050-c283d98626c9-kube-api-access-ln2r8\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.323472 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/64730b89-aa49-4741-b050-c283d98626c9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.323613 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/64730b89-aa49-4741-b050-c283d98626c9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.323690 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/64730b89-aa49-4741-b050-c283d98626c9-server-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.323722 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.323807 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64730b89-aa49-4741-b050-c283d98626c9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.324232 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/64730b89-aa49-4741-b050-c283d98626c9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.323276 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/64730b89-aa49-4741-b050-c283d98626c9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.325081 4637 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") device mount path \"/mnt/openstack/pv06\"" 
pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.327814 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/64730b89-aa49-4741-b050-c283d98626c9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.329545 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/64730b89-aa49-4741-b050-c283d98626c9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.330465 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/64730b89-aa49-4741-b050-c283d98626c9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.330854 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/64730b89-aa49-4741-b050-c283d98626c9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.336880 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/64730b89-aa49-4741-b050-c283d98626c9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.343125 4637 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64730b89-aa49-4741-b050-c283d98626c9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.345371 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/64730b89-aa49-4741-b050-c283d98626c9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.352994 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln2r8\" (UniqueName: \"kubernetes.io/projected/64730b89-aa49-4741-b050-c283d98626c9-kube-api-access-ln2r8\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.367327 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"64730b89-aa49-4741-b050-c283d98626c9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.422332 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.764305 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.794442 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bee806ff-8bec-49d0-a47f-bfd8edbb36fb" path="/var/lib/kubelet/pods/bee806ff-8bec-49d0-a47f-bfd8edbb36fb/volumes" Dec 01 15:08:09 crc kubenswrapper[4637]: I1201 15:08:09.998553 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"64730b89-aa49-4741-b050-c283d98626c9","Type":"ContainerStarted","Data":"d477c62fa6b223866c46d9bfe23bbd775a6018c602bc7a2bae232170c1ee9fc3"} Dec 01 15:08:10 crc kubenswrapper[4637]: I1201 15:08:10.002516 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-szjk2" event={"ID":"0e8faf04-5caa-4ba4-8100-866d7c7182f8","Type":"ContainerStarted","Data":"48a22eff6a49269b8f3d257bb647b07fe49343277ada0a807884ec136ef8f928"} Dec 01 15:08:10 crc kubenswrapper[4637]: I1201 15:08:10.003913 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5576978c7c-szjk2" Dec 01 15:08:10 crc kubenswrapper[4637]: I1201 15:08:10.036781 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5576978c7c-szjk2" podStartSLOduration=4.036757175 podStartE2EDuration="4.036757175s" podCreationTimestamp="2025-12-01 15:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:08:10.035211684 +0000 UTC m=+1340.552920512" watchObservedRunningTime="2025-12-01 15:08:10.036757175 +0000 UTC m=+1340.554466003" Dec 01 15:08:11 crc kubenswrapper[4637]: I1201 15:08:11.066690 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"966262a4-bd2b-40fd-b052-ce2bd68485b5","Type":"ContainerStarted","Data":"371beb30866ed7293e100d1ad4d9e091eb96c9cea6e5c520df30c6edd95f600f"} Dec 01 15:08:12 crc kubenswrapper[4637]: I1201 15:08:12.078402 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"64730b89-aa49-4741-b050-c283d98626c9","Type":"ContainerStarted","Data":"685084ae71b3bfc3e720a133bf44d85cada71a9b3a7aacdeb9818b0c538d760e"} Dec 01 15:08:15 crc kubenswrapper[4637]: I1201 15:08:15.613308 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:08:15 crc kubenswrapper[4637]: I1201 15:08:15.614238 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:08:17 crc kubenswrapper[4637]: I1201 15:08:17.282089 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5576978c7c-szjk2" Dec 01 15:08:17 crc kubenswrapper[4637]: I1201 15:08:17.361724 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-tn4rb"] Dec 01 15:08:17 crc kubenswrapper[4637]: I1201 15:08:17.361965 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" podUID="be11f54b-290b-4bd3-9f5f-8a2f78d49bcc" containerName="dnsmasq-dns" containerID="cri-o://f4f47fd398b510758b89506b0e971eae6fc87acf27cb3f5d6634c984c891a1cd" gracePeriod=10 Dec 01 15:08:17 crc kubenswrapper[4637]: I1201 15:08:17.583307 4637 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55fb7f8d4c-mcw8w"] Dec 01 15:08:17 crc kubenswrapper[4637]: I1201 15:08:17.585747 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" Dec 01 15:08:17 crc kubenswrapper[4637]: I1201 15:08:17.597184 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fb7f8d4c-mcw8w"] Dec 01 15:08:17 crc kubenswrapper[4637]: I1201 15:08:17.640442 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e879137-dbe4-4b26-a4bc-21cd963dc5e9-dns-svc\") pod \"dnsmasq-dns-55fb7f8d4c-mcw8w\" (UID: \"0e879137-dbe4-4b26-a4bc-21cd963dc5e9\") " pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" Dec 01 15:08:17 crc kubenswrapper[4637]: I1201 15:08:17.640841 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e879137-dbe4-4b26-a4bc-21cd963dc5e9-ovsdbserver-nb\") pod \"dnsmasq-dns-55fb7f8d4c-mcw8w\" (UID: \"0e879137-dbe4-4b26-a4bc-21cd963dc5e9\") " pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" Dec 01 15:08:17 crc kubenswrapper[4637]: I1201 15:08:17.640886 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e879137-dbe4-4b26-a4bc-21cd963dc5e9-config\") pod \"dnsmasq-dns-55fb7f8d4c-mcw8w\" (UID: \"0e879137-dbe4-4b26-a4bc-21cd963dc5e9\") " pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" Dec 01 15:08:17 crc kubenswrapper[4637]: I1201 15:08:17.640920 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0e879137-dbe4-4b26-a4bc-21cd963dc5e9-dns-swift-storage-0\") pod \"dnsmasq-dns-55fb7f8d4c-mcw8w\" (UID: \"0e879137-dbe4-4b26-a4bc-21cd963dc5e9\") " 
pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" Dec 01 15:08:17 crc kubenswrapper[4637]: I1201 15:08:17.643145 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0e879137-dbe4-4b26-a4bc-21cd963dc5e9-openstack-edpm-ipam\") pod \"dnsmasq-dns-55fb7f8d4c-mcw8w\" (UID: \"0e879137-dbe4-4b26-a4bc-21cd963dc5e9\") " pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" Dec 01 15:08:17 crc kubenswrapper[4637]: I1201 15:08:17.643200 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e879137-dbe4-4b26-a4bc-21cd963dc5e9-ovsdbserver-sb\") pod \"dnsmasq-dns-55fb7f8d4c-mcw8w\" (UID: \"0e879137-dbe4-4b26-a4bc-21cd963dc5e9\") " pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" Dec 01 15:08:17 crc kubenswrapper[4637]: I1201 15:08:17.643255 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d76kt\" (UniqueName: \"kubernetes.io/projected/0e879137-dbe4-4b26-a4bc-21cd963dc5e9-kube-api-access-d76kt\") pod \"dnsmasq-dns-55fb7f8d4c-mcw8w\" (UID: \"0e879137-dbe4-4b26-a4bc-21cd963dc5e9\") " pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" Dec 01 15:08:17 crc kubenswrapper[4637]: I1201 15:08:17.745087 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e879137-dbe4-4b26-a4bc-21cd963dc5e9-ovsdbserver-sb\") pod \"dnsmasq-dns-55fb7f8d4c-mcw8w\" (UID: \"0e879137-dbe4-4b26-a4bc-21cd963dc5e9\") " pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" Dec 01 15:08:17 crc kubenswrapper[4637]: I1201 15:08:17.745158 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d76kt\" (UniqueName: \"kubernetes.io/projected/0e879137-dbe4-4b26-a4bc-21cd963dc5e9-kube-api-access-d76kt\") pod \"dnsmasq-dns-55fb7f8d4c-mcw8w\" (UID: 
\"0e879137-dbe4-4b26-a4bc-21cd963dc5e9\") " pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" Dec 01 15:08:17 crc kubenswrapper[4637]: I1201 15:08:17.745211 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e879137-dbe4-4b26-a4bc-21cd963dc5e9-dns-svc\") pod \"dnsmasq-dns-55fb7f8d4c-mcw8w\" (UID: \"0e879137-dbe4-4b26-a4bc-21cd963dc5e9\") " pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" Dec 01 15:08:17 crc kubenswrapper[4637]: I1201 15:08:17.745233 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e879137-dbe4-4b26-a4bc-21cd963dc5e9-ovsdbserver-nb\") pod \"dnsmasq-dns-55fb7f8d4c-mcw8w\" (UID: \"0e879137-dbe4-4b26-a4bc-21cd963dc5e9\") " pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" Dec 01 15:08:17 crc kubenswrapper[4637]: I1201 15:08:17.745287 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e879137-dbe4-4b26-a4bc-21cd963dc5e9-config\") pod \"dnsmasq-dns-55fb7f8d4c-mcw8w\" (UID: \"0e879137-dbe4-4b26-a4bc-21cd963dc5e9\") " pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" Dec 01 15:08:17 crc kubenswrapper[4637]: I1201 15:08:17.745339 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0e879137-dbe4-4b26-a4bc-21cd963dc5e9-dns-swift-storage-0\") pod \"dnsmasq-dns-55fb7f8d4c-mcw8w\" (UID: \"0e879137-dbe4-4b26-a4bc-21cd963dc5e9\") " pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" Dec 01 15:08:17 crc kubenswrapper[4637]: I1201 15:08:17.745407 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0e879137-dbe4-4b26-a4bc-21cd963dc5e9-openstack-edpm-ipam\") pod \"dnsmasq-dns-55fb7f8d4c-mcw8w\" (UID: \"0e879137-dbe4-4b26-a4bc-21cd963dc5e9\") " 
pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" Dec 01 15:08:17 crc kubenswrapper[4637]: I1201 15:08:17.746496 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0e879137-dbe4-4b26-a4bc-21cd963dc5e9-openstack-edpm-ipam\") pod \"dnsmasq-dns-55fb7f8d4c-mcw8w\" (UID: \"0e879137-dbe4-4b26-a4bc-21cd963dc5e9\") " pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" Dec 01 15:08:17 crc kubenswrapper[4637]: I1201 15:08:17.746514 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e879137-dbe4-4b26-a4bc-21cd963dc5e9-ovsdbserver-sb\") pod \"dnsmasq-dns-55fb7f8d4c-mcw8w\" (UID: \"0e879137-dbe4-4b26-a4bc-21cd963dc5e9\") " pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" Dec 01 15:08:17 crc kubenswrapper[4637]: I1201 15:08:17.746876 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e879137-dbe4-4b26-a4bc-21cd963dc5e9-dns-svc\") pod \"dnsmasq-dns-55fb7f8d4c-mcw8w\" (UID: \"0e879137-dbe4-4b26-a4bc-21cd963dc5e9\") " pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" Dec 01 15:08:17 crc kubenswrapper[4637]: I1201 15:08:17.747015 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e879137-dbe4-4b26-a4bc-21cd963dc5e9-ovsdbserver-nb\") pod \"dnsmasq-dns-55fb7f8d4c-mcw8w\" (UID: \"0e879137-dbe4-4b26-a4bc-21cd963dc5e9\") " pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" Dec 01 15:08:17 crc kubenswrapper[4637]: I1201 15:08:17.747210 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e879137-dbe4-4b26-a4bc-21cd963dc5e9-config\") pod \"dnsmasq-dns-55fb7f8d4c-mcw8w\" (UID: \"0e879137-dbe4-4b26-a4bc-21cd963dc5e9\") " pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" Dec 01 15:08:17 crc kubenswrapper[4637]: I1201 15:08:17.747526 4637 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0e879137-dbe4-4b26-a4bc-21cd963dc5e9-dns-swift-storage-0\") pod \"dnsmasq-dns-55fb7f8d4c-mcw8w\" (UID: \"0e879137-dbe4-4b26-a4bc-21cd963dc5e9\") " pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" Dec 01 15:08:17 crc kubenswrapper[4637]: I1201 15:08:17.791192 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d76kt\" (UniqueName: \"kubernetes.io/projected/0e879137-dbe4-4b26-a4bc-21cd963dc5e9-kube-api-access-d76kt\") pod \"dnsmasq-dns-55fb7f8d4c-mcw8w\" (UID: \"0e879137-dbe4-4b26-a4bc-21cd963dc5e9\") " pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" Dec 01 15:08:17 crc kubenswrapper[4637]: I1201 15:08:17.922587 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.041202 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.154320 4637 generic.go:334] "Generic (PLEG): container finished" podID="be11f54b-290b-4bd3-9f5f-8a2f78d49bcc" containerID="f4f47fd398b510758b89506b0e971eae6fc87acf27cb3f5d6634c984c891a1cd" exitCode=0 Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.154391 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" event={"ID":"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc","Type":"ContainerDied","Data":"f4f47fd398b510758b89506b0e971eae6fc87acf27cb3f5d6634c984c891a1cd"} Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.154664 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" event={"ID":"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc","Type":"ContainerDied","Data":"71609ba6ca05fc10e3c50e5bc0d163c3ea332538278cb6330173973e3b0dc6f4"} Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.154691 4637 scope.go:117] "RemoveContainer" containerID="f4f47fd398b510758b89506b0e971eae6fc87acf27cb3f5d6634c984c891a1cd" Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.154520 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-tn4rb" Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.163517 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-ovsdbserver-sb\") pod \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\" (UID: \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\") " Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.163579 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-dns-swift-storage-0\") pod \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\" (UID: \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\") " Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.163709 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-dns-svc\") pod \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\" (UID: \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\") " Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.163751 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gmk6\" (UniqueName: \"kubernetes.io/projected/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-kube-api-access-9gmk6\") pod \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\" (UID: \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\") " Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.163957 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-ovsdbserver-nb\") pod \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\" (UID: \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\") " Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.163983 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-config\") pod \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\" (UID: \"be11f54b-290b-4bd3-9f5f-8a2f78d49bcc\") " Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.195174 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-kube-api-access-9gmk6" (OuterVolumeSpecName: "kube-api-access-9gmk6") pod "be11f54b-290b-4bd3-9f5f-8a2f78d49bcc" (UID: "be11f54b-290b-4bd3-9f5f-8a2f78d49bcc"). InnerVolumeSpecName "kube-api-access-9gmk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.205449 4637 scope.go:117] "RemoveContainer" containerID="6a424b3142b314bdfa051ab113a6b8c79ed22f5dc77b520e7ee1cb690729eb9a" Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.257693 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "be11f54b-290b-4bd3-9f5f-8a2f78d49bcc" (UID: "be11f54b-290b-4bd3-9f5f-8a2f78d49bcc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.265389 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-config" (OuterVolumeSpecName: "config") pod "be11f54b-290b-4bd3-9f5f-8a2f78d49bcc" (UID: "be11f54b-290b-4bd3-9f5f-8a2f78d49bcc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.268476 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "be11f54b-290b-4bd3-9f5f-8a2f78d49bcc" (UID: "be11f54b-290b-4bd3-9f5f-8a2f78d49bcc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.269686 4637 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.269721 4637 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-config\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.269731 4637 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.269742 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gmk6\" (UniqueName: \"kubernetes.io/projected/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-kube-api-access-9gmk6\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.303335 4637 scope.go:117] "RemoveContainer" containerID="f4f47fd398b510758b89506b0e971eae6fc87acf27cb3f5d6634c984c891a1cd" Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.318515 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fb7f8d4c-mcw8w"] Dec 01 15:08:18 crc kubenswrapper[4637]: E1201 15:08:18.337403 4637 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4f47fd398b510758b89506b0e971eae6fc87acf27cb3f5d6634c984c891a1cd\": container with ID starting with f4f47fd398b510758b89506b0e971eae6fc87acf27cb3f5d6634c984c891a1cd not found: ID does not exist" containerID="f4f47fd398b510758b89506b0e971eae6fc87acf27cb3f5d6634c984c891a1cd" Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.337466 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4f47fd398b510758b89506b0e971eae6fc87acf27cb3f5d6634c984c891a1cd"} err="failed to get container status \"f4f47fd398b510758b89506b0e971eae6fc87acf27cb3f5d6634c984c891a1cd\": rpc error: code = NotFound desc = could not find container \"f4f47fd398b510758b89506b0e971eae6fc87acf27cb3f5d6634c984c891a1cd\": container with ID starting with f4f47fd398b510758b89506b0e971eae6fc87acf27cb3f5d6634c984c891a1cd not found: ID does not exist" Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.337751 4637 scope.go:117] "RemoveContainer" containerID="6a424b3142b314bdfa051ab113a6b8c79ed22f5dc77b520e7ee1cb690729eb9a" Dec 01 15:08:18 crc kubenswrapper[4637]: E1201 15:08:18.340982 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a424b3142b314bdfa051ab113a6b8c79ed22f5dc77b520e7ee1cb690729eb9a\": container with ID starting with 6a424b3142b314bdfa051ab113a6b8c79ed22f5dc77b520e7ee1cb690729eb9a not found: ID does not exist" containerID="6a424b3142b314bdfa051ab113a6b8c79ed22f5dc77b520e7ee1cb690729eb9a" Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.341066 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a424b3142b314bdfa051ab113a6b8c79ed22f5dc77b520e7ee1cb690729eb9a"} err="failed to get container status \"6a424b3142b314bdfa051ab113a6b8c79ed22f5dc77b520e7ee1cb690729eb9a\": rpc error: code = NotFound desc = could 
not find container \"6a424b3142b314bdfa051ab113a6b8c79ed22f5dc77b520e7ee1cb690729eb9a\": container with ID starting with 6a424b3142b314bdfa051ab113a6b8c79ed22f5dc77b520e7ee1cb690729eb9a not found: ID does not exist" Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.351469 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "be11f54b-290b-4bd3-9f5f-8a2f78d49bcc" (UID: "be11f54b-290b-4bd3-9f5f-8a2f78d49bcc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.371517 4637 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.383775 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "be11f54b-290b-4bd3-9f5f-8a2f78d49bcc" (UID: "be11f54b-290b-4bd3-9f5f-8a2f78d49bcc"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.473748 4637 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.503991 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-tn4rb"] Dec 01 15:08:18 crc kubenswrapper[4637]: I1201 15:08:18.512465 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-tn4rb"] Dec 01 15:08:19 crc kubenswrapper[4637]: I1201 15:08:19.166767 4637 generic.go:334] "Generic (PLEG): container finished" podID="0e879137-dbe4-4b26-a4bc-21cd963dc5e9" containerID="75507934b0074d92953c89da3391166e80b57a1cac1366504e6ba86bb2f3eb5c" exitCode=0 Dec 01 15:08:19 crc kubenswrapper[4637]: I1201 15:08:19.166832 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" event={"ID":"0e879137-dbe4-4b26-a4bc-21cd963dc5e9","Type":"ContainerDied","Data":"75507934b0074d92953c89da3391166e80b57a1cac1366504e6ba86bb2f3eb5c"} Dec 01 15:08:19 crc kubenswrapper[4637]: I1201 15:08:19.166870 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" event={"ID":"0e879137-dbe4-4b26-a4bc-21cd963dc5e9","Type":"ContainerStarted","Data":"5064d19afbd1994d565bc85793157843b91c124eaeb953df2ab5075bf808bb9d"} Dec 01 15:08:19 crc kubenswrapper[4637]: I1201 15:08:19.783681 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be11f54b-290b-4bd3-9f5f-8a2f78d49bcc" path="/var/lib/kubelet/pods/be11f54b-290b-4bd3-9f5f-8a2f78d49bcc/volumes" Dec 01 15:08:20 crc kubenswrapper[4637]: I1201 15:08:20.178100 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" 
event={"ID":"0e879137-dbe4-4b26-a4bc-21cd963dc5e9","Type":"ContainerStarted","Data":"b4ac3aa5400985cd2cc87e2bcc80c290237eaed6c4f0933ac4ec5f632685435d"} Dec 01 15:08:20 crc kubenswrapper[4637]: I1201 15:08:20.178235 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" Dec 01 15:08:20 crc kubenswrapper[4637]: I1201 15:08:20.205985 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" podStartSLOduration=3.205966317 podStartE2EDuration="3.205966317s" podCreationTimestamp="2025-12-01 15:08:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:08:20.20311057 +0000 UTC m=+1350.720819398" watchObservedRunningTime="2025-12-01 15:08:20.205966317 +0000 UTC m=+1350.723675145" Dec 01 15:08:27 crc kubenswrapper[4637]: I1201 15:08:27.924186 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55fb7f8d4c-mcw8w" Dec 01 15:08:28 crc kubenswrapper[4637]: I1201 15:08:28.018315 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-szjk2"] Dec 01 15:08:28 crc kubenswrapper[4637]: I1201 15:08:28.018692 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5576978c7c-szjk2" podUID="0e8faf04-5caa-4ba4-8100-866d7c7182f8" containerName="dnsmasq-dns" containerID="cri-o://48a22eff6a49269b8f3d257bb647b07fe49343277ada0a807884ec136ef8f928" gracePeriod=10 Dec 01 15:08:28 crc kubenswrapper[4637]: I1201 15:08:28.262001 4637 generic.go:334] "Generic (PLEG): container finished" podID="0e8faf04-5caa-4ba4-8100-866d7c7182f8" containerID="48a22eff6a49269b8f3d257bb647b07fe49343277ada0a807884ec136ef8f928" exitCode=0 Dec 01 15:08:28 crc kubenswrapper[4637]: I1201 15:08:28.262225 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5576978c7c-szjk2" event={"ID":"0e8faf04-5caa-4ba4-8100-866d7c7182f8","Type":"ContainerDied","Data":"48a22eff6a49269b8f3d257bb647b07fe49343277ada0a807884ec136ef8f928"} Dec 01 15:08:28 crc kubenswrapper[4637]: I1201 15:08:28.595082 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-szjk2" Dec 01 15:08:28 crc kubenswrapper[4637]: I1201 15:08:28.691645 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-ovsdbserver-nb\") pod \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\" (UID: \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\") " Dec 01 15:08:28 crc kubenswrapper[4637]: I1201 15:08:28.692215 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-openstack-edpm-ipam\") pod \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\" (UID: \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\") " Dec 01 15:08:28 crc kubenswrapper[4637]: I1201 15:08:28.692322 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7mdq\" (UniqueName: \"kubernetes.io/projected/0e8faf04-5caa-4ba4-8100-866d7c7182f8-kube-api-access-k7mdq\") pod \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\" (UID: \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\") " Dec 01 15:08:28 crc kubenswrapper[4637]: I1201 15:08:28.692361 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-dns-svc\") pod \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\" (UID: \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\") " Dec 01 15:08:28 crc kubenswrapper[4637]: I1201 15:08:28.692425 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-config\") pod \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\" (UID: \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\") " Dec 01 15:08:28 crc kubenswrapper[4637]: I1201 15:08:28.692588 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-ovsdbserver-sb\") pod \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\" (UID: \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\") " Dec 01 15:08:28 crc kubenswrapper[4637]: I1201 15:08:28.692684 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-dns-swift-storage-0\") pod \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\" (UID: \"0e8faf04-5caa-4ba4-8100-866d7c7182f8\") " Dec 01 15:08:28 crc kubenswrapper[4637]: I1201 15:08:28.738242 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e8faf04-5caa-4ba4-8100-866d7c7182f8-kube-api-access-k7mdq" (OuterVolumeSpecName: "kube-api-access-k7mdq") pod "0e8faf04-5caa-4ba4-8100-866d7c7182f8" (UID: "0e8faf04-5caa-4ba4-8100-866d7c7182f8"). InnerVolumeSpecName "kube-api-access-k7mdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:08:28 crc kubenswrapper[4637]: I1201 15:08:28.747451 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0e8faf04-5caa-4ba4-8100-866d7c7182f8" (UID: "0e8faf04-5caa-4ba4-8100-866d7c7182f8"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:08:28 crc kubenswrapper[4637]: I1201 15:08:28.764515 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-config" (OuterVolumeSpecName: "config") pod "0e8faf04-5caa-4ba4-8100-866d7c7182f8" (UID: "0e8faf04-5caa-4ba4-8100-866d7c7182f8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:08:28 crc kubenswrapper[4637]: I1201 15:08:28.765298 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0e8faf04-5caa-4ba4-8100-866d7c7182f8" (UID: "0e8faf04-5caa-4ba4-8100-866d7c7182f8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:08:28 crc kubenswrapper[4637]: I1201 15:08:28.766510 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "0e8faf04-5caa-4ba4-8100-866d7c7182f8" (UID: "0e8faf04-5caa-4ba4-8100-866d7c7182f8"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:08:28 crc kubenswrapper[4637]: I1201 15:08:28.776514 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0e8faf04-5caa-4ba4-8100-866d7c7182f8" (UID: "0e8faf04-5caa-4ba4-8100-866d7c7182f8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:08:28 crc kubenswrapper[4637]: I1201 15:08:28.788543 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0e8faf04-5caa-4ba4-8100-866d7c7182f8" (UID: "0e8faf04-5caa-4ba4-8100-866d7c7182f8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:08:28 crc kubenswrapper[4637]: I1201 15:08:28.800459 4637 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:28 crc kubenswrapper[4637]: I1201 15:08:28.800493 4637 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:28 crc kubenswrapper[4637]: I1201 15:08:28.800504 4637 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:28 crc kubenswrapper[4637]: I1201 15:08:28.800513 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7mdq\" (UniqueName: \"kubernetes.io/projected/0e8faf04-5caa-4ba4-8100-866d7c7182f8-kube-api-access-k7mdq\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:28 crc kubenswrapper[4637]: I1201 15:08:28.800526 4637 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:28 crc kubenswrapper[4637]: I1201 15:08:28.800537 4637 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-config\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:28 crc kubenswrapper[4637]: I1201 15:08:28.800545 4637 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e8faf04-5caa-4ba4-8100-866d7c7182f8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 15:08:29 crc kubenswrapper[4637]: I1201 15:08:29.275391 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-szjk2" event={"ID":"0e8faf04-5caa-4ba4-8100-866d7c7182f8","Type":"ContainerDied","Data":"a0cf83580fcc432b2f05292b532b705344ef05a1b84ed6769e13da990bb96fed"} Dec 01 15:08:29 crc kubenswrapper[4637]: I1201 15:08:29.275445 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-szjk2" Dec 01 15:08:29 crc kubenswrapper[4637]: I1201 15:08:29.275474 4637 scope.go:117] "RemoveContainer" containerID="48a22eff6a49269b8f3d257bb647b07fe49343277ada0a807884ec136ef8f928" Dec 01 15:08:29 crc kubenswrapper[4637]: I1201 15:08:29.314108 4637 scope.go:117] "RemoveContainer" containerID="ad1023a7b704ba11d62b7f0d8bd7b0ec13201e392982fcfedd39a4ac771210da" Dec 01 15:08:29 crc kubenswrapper[4637]: I1201 15:08:29.325366 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-szjk2"] Dec 01 15:08:29 crc kubenswrapper[4637]: I1201 15:08:29.334478 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-szjk2"] Dec 01 15:08:29 crc kubenswrapper[4637]: I1201 15:08:29.790923 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e8faf04-5caa-4ba4-8100-866d7c7182f8" path="/var/lib/kubelet/pods/0e8faf04-5caa-4ba4-8100-866d7c7182f8/volumes" Dec 01 15:08:31 crc kubenswrapper[4637]: I1201 15:08:31.853699 4637 scope.go:117] "RemoveContainer" 
containerID="c01e34ee19b6ccfceeeb1bb9b60609966c84c1124dac3600bb258bdfb2b8cfd3" Dec 01 15:08:43 crc kubenswrapper[4637]: I1201 15:08:43.440193 4637 generic.go:334] "Generic (PLEG): container finished" podID="966262a4-bd2b-40fd-b052-ce2bd68485b5" containerID="371beb30866ed7293e100d1ad4d9e091eb96c9cea6e5c520df30c6edd95f600f" exitCode=0 Dec 01 15:08:43 crc kubenswrapper[4637]: I1201 15:08:43.440394 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"966262a4-bd2b-40fd-b052-ce2bd68485b5","Type":"ContainerDied","Data":"371beb30866ed7293e100d1ad4d9e091eb96c9cea6e5c520df30c6edd95f600f"} Dec 01 15:08:44 crc kubenswrapper[4637]: I1201 15:08:44.451768 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"966262a4-bd2b-40fd-b052-ce2bd68485b5","Type":"ContainerStarted","Data":"c697694a8fb58adf92f46cd7bebbfb1045cbcbbcdee2b6f8d5bbb87b0e5ac179"} Dec 01 15:08:44 crc kubenswrapper[4637]: I1201 15:08:44.452388 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 01 15:08:44 crc kubenswrapper[4637]: I1201 15:08:44.453535 4637 generic.go:334] "Generic (PLEG): container finished" podID="64730b89-aa49-4741-b050-c283d98626c9" containerID="685084ae71b3bfc3e720a133bf44d85cada71a9b3a7aacdeb9818b0c538d760e" exitCode=0 Dec 01 15:08:44 crc kubenswrapper[4637]: I1201 15:08:44.453572 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"64730b89-aa49-4741-b050-c283d98626c9","Type":"ContainerDied","Data":"685084ae71b3bfc3e720a133bf44d85cada71a9b3a7aacdeb9818b0c538d760e"} Dec 01 15:08:44 crc kubenswrapper[4637]: I1201 15:08:44.529384 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.529363084 podStartE2EDuration="37.529363084s" podCreationTimestamp="2025-12-01 15:08:07 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:08:44.484697996 +0000 UTC m=+1375.002406844" watchObservedRunningTime="2025-12-01 15:08:44.529363084 +0000 UTC m=+1375.047071912" Dec 01 15:08:45 crc kubenswrapper[4637]: I1201 15:08:45.474245 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"64730b89-aa49-4741-b050-c283d98626c9","Type":"ContainerStarted","Data":"3bc9037852198adcfee4eae041b89487d7df0a67daa3039fef14e7e9e63f5591"} Dec 01 15:08:45 crc kubenswrapper[4637]: I1201 15:08:45.474959 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:08:45 crc kubenswrapper[4637]: I1201 15:08:45.510986 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.510965289 podStartE2EDuration="36.510965289s" podCreationTimestamp="2025-12-01 15:08:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:08:45.503143197 +0000 UTC m=+1376.020852025" watchObservedRunningTime="2025-12-01 15:08:45.510965289 +0000 UTC m=+1376.028674117" Dec 01 15:08:45 crc kubenswrapper[4637]: I1201 15:08:45.614071 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:08:45 crc kubenswrapper[4637]: I1201 15:08:45.614141 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Dec 01 15:08:45 crc kubenswrapper[4637]: I1201 15:08:45.614196 4637 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 15:08:45 crc kubenswrapper[4637]: I1201 15:08:45.615206 4637 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0c5320ef3b59baf3d6a19e6cd72f308b7cd46bf2e7050ff92c6f67ab6ef1839a"} pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 15:08:45 crc kubenswrapper[4637]: I1201 15:08:45.615279 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" containerID="cri-o://0c5320ef3b59baf3d6a19e6cd72f308b7cd46bf2e7050ff92c6f67ab6ef1839a" gracePeriod=600 Dec 01 15:08:46 crc kubenswrapper[4637]: I1201 15:08:46.489403 4637 generic.go:334] "Generic (PLEG): container finished" podID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerID="0c5320ef3b59baf3d6a19e6cd72f308b7cd46bf2e7050ff92c6f67ab6ef1839a" exitCode=0 Dec 01 15:08:46 crc kubenswrapper[4637]: I1201 15:08:46.489482 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerDied","Data":"0c5320ef3b59baf3d6a19e6cd72f308b7cd46bf2e7050ff92c6f67ab6ef1839a"} Dec 01 15:08:46 crc kubenswrapper[4637]: I1201 15:08:46.490251 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" 
event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerStarted","Data":"6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8"} Dec 01 15:08:46 crc kubenswrapper[4637]: I1201 15:08:46.490278 4637 scope.go:117] "RemoveContainer" containerID="b32cce2c47c067f58e7391d6910b6c3148987eb146b9d1e7fc73d2cd86f483da" Dec 01 15:08:51 crc kubenswrapper[4637]: I1201 15:08:51.709674 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9"] Dec 01 15:08:51 crc kubenswrapper[4637]: E1201 15:08:51.710640 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e8faf04-5caa-4ba4-8100-866d7c7182f8" containerName="dnsmasq-dns" Dec 01 15:08:51 crc kubenswrapper[4637]: I1201 15:08:51.710655 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e8faf04-5caa-4ba4-8100-866d7c7182f8" containerName="dnsmasq-dns" Dec 01 15:08:51 crc kubenswrapper[4637]: E1201 15:08:51.710676 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be11f54b-290b-4bd3-9f5f-8a2f78d49bcc" containerName="dnsmasq-dns" Dec 01 15:08:51 crc kubenswrapper[4637]: I1201 15:08:51.710682 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="be11f54b-290b-4bd3-9f5f-8a2f78d49bcc" containerName="dnsmasq-dns" Dec 01 15:08:51 crc kubenswrapper[4637]: E1201 15:08:51.710707 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be11f54b-290b-4bd3-9f5f-8a2f78d49bcc" containerName="init" Dec 01 15:08:51 crc kubenswrapper[4637]: I1201 15:08:51.710714 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="be11f54b-290b-4bd3-9f5f-8a2f78d49bcc" containerName="init" Dec 01 15:08:51 crc kubenswrapper[4637]: E1201 15:08:51.710727 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e8faf04-5caa-4ba4-8100-866d7c7182f8" containerName="init" Dec 01 15:08:51 crc kubenswrapper[4637]: I1201 15:08:51.710733 4637 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0e8faf04-5caa-4ba4-8100-866d7c7182f8" containerName="init" Dec 01 15:08:51 crc kubenswrapper[4637]: I1201 15:08:51.710900 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e8faf04-5caa-4ba4-8100-866d7c7182f8" containerName="dnsmasq-dns" Dec 01 15:08:51 crc kubenswrapper[4637]: I1201 15:08:51.710918 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="be11f54b-290b-4bd3-9f5f-8a2f78d49bcc" containerName="dnsmasq-dns" Dec 01 15:08:51 crc kubenswrapper[4637]: I1201 15:08:51.711694 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9" Dec 01 15:08:51 crc kubenswrapper[4637]: I1201 15:08:51.727208 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 15:08:51 crc kubenswrapper[4637]: I1201 15:08:51.727905 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 15:08:51 crc kubenswrapper[4637]: I1201 15:08:51.729286 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 15:08:51 crc kubenswrapper[4637]: I1201 15:08:51.729993 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lt5wx" Dec 01 15:08:51 crc kubenswrapper[4637]: I1201 15:08:51.781405 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9"] Dec 01 15:08:51 crc kubenswrapper[4637]: I1201 15:08:51.844406 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j46kn\" (UniqueName: \"kubernetes.io/projected/f42fd0ac-99c4-49ed-87d0-fe00a580a2ea-kube-api-access-j46kn\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9\" (UID: \"f42fd0ac-99c4-49ed-87d0-fe00a580a2ea\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9" Dec 01 15:08:51 crc kubenswrapper[4637]: I1201 15:08:51.844678 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f42fd0ac-99c4-49ed-87d0-fe00a580a2ea-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9\" (UID: \"f42fd0ac-99c4-49ed-87d0-fe00a580a2ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9" Dec 01 15:08:51 crc kubenswrapper[4637]: I1201 15:08:51.844714 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f42fd0ac-99c4-49ed-87d0-fe00a580a2ea-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9\" (UID: \"f42fd0ac-99c4-49ed-87d0-fe00a580a2ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9" Dec 01 15:08:51 crc kubenswrapper[4637]: I1201 15:08:51.844754 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f42fd0ac-99c4-49ed-87d0-fe00a580a2ea-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9\" (UID: \"f42fd0ac-99c4-49ed-87d0-fe00a580a2ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9" Dec 01 15:08:51 crc kubenswrapper[4637]: I1201 15:08:51.946248 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f42fd0ac-99c4-49ed-87d0-fe00a580a2ea-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9\" (UID: \"f42fd0ac-99c4-49ed-87d0-fe00a580a2ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9" Dec 01 15:08:51 crc kubenswrapper[4637]: I1201 15:08:51.946340 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-j46kn\" (UniqueName: \"kubernetes.io/projected/f42fd0ac-99c4-49ed-87d0-fe00a580a2ea-kube-api-access-j46kn\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9\" (UID: \"f42fd0ac-99c4-49ed-87d0-fe00a580a2ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9" Dec 01 15:08:51 crc kubenswrapper[4637]: I1201 15:08:51.946535 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f42fd0ac-99c4-49ed-87d0-fe00a580a2ea-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9\" (UID: \"f42fd0ac-99c4-49ed-87d0-fe00a580a2ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9" Dec 01 15:08:51 crc kubenswrapper[4637]: I1201 15:08:51.946568 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f42fd0ac-99c4-49ed-87d0-fe00a580a2ea-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9\" (UID: \"f42fd0ac-99c4-49ed-87d0-fe00a580a2ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9" Dec 01 15:08:51 crc kubenswrapper[4637]: I1201 15:08:51.953881 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f42fd0ac-99c4-49ed-87d0-fe00a580a2ea-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9\" (UID: \"f42fd0ac-99c4-49ed-87d0-fe00a580a2ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9" Dec 01 15:08:51 crc kubenswrapper[4637]: I1201 15:08:51.954037 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f42fd0ac-99c4-49ed-87d0-fe00a580a2ea-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9\" (UID: \"f42fd0ac-99c4-49ed-87d0-fe00a580a2ea\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9" Dec 01 15:08:51 crc kubenswrapper[4637]: I1201 15:08:51.955402 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f42fd0ac-99c4-49ed-87d0-fe00a580a2ea-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9\" (UID: \"f42fd0ac-99c4-49ed-87d0-fe00a580a2ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9" Dec 01 15:08:51 crc kubenswrapper[4637]: I1201 15:08:51.967825 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j46kn\" (UniqueName: \"kubernetes.io/projected/f42fd0ac-99c4-49ed-87d0-fe00a580a2ea-kube-api-access-j46kn\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9\" (UID: \"f42fd0ac-99c4-49ed-87d0-fe00a580a2ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9" Dec 01 15:08:52 crc kubenswrapper[4637]: I1201 15:08:52.054661 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9" Dec 01 15:08:53 crc kubenswrapper[4637]: I1201 15:08:52.937445 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9"] Dec 01 15:08:53 crc kubenswrapper[4637]: I1201 15:08:53.567002 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9" event={"ID":"f42fd0ac-99c4-49ed-87d0-fe00a580a2ea","Type":"ContainerStarted","Data":"50e79bfd0ba0f0ff8acf23af3d58e8a65be56bd6992b87fcb9ee05200144416d"} Dec 01 15:08:58 crc kubenswrapper[4637]: I1201 15:08:58.049271 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 01 15:08:59 crc kubenswrapper[4637]: I1201 15:08:59.426145 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 01 15:09:06 crc kubenswrapper[4637]: I1201 15:09:06.735714 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9" event={"ID":"f42fd0ac-99c4-49ed-87d0-fe00a580a2ea","Type":"ContainerStarted","Data":"ed2588bc733781aca510d61f0973287418f8037812841beb24d3e71e5845b435"} Dec 01 15:09:06 crc kubenswrapper[4637]: I1201 15:09:06.760528 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9" podStartSLOduration=2.915958077 podStartE2EDuration="15.76050289s" podCreationTimestamp="2025-12-01 15:08:51 +0000 UTC" firstStartedPulling="2025-12-01 15:08:52.92130451 +0000 UTC m=+1383.439013338" lastFinishedPulling="2025-12-01 15:09:05.765849323 +0000 UTC m=+1396.283558151" observedRunningTime="2025-12-01 15:09:06.753444529 +0000 UTC m=+1397.271153357" watchObservedRunningTime="2025-12-01 15:09:06.76050289 +0000 UTC m=+1397.278211708" Dec 01 15:09:18 crc kubenswrapper[4637]: 
I1201 15:09:18.844230 4637 generic.go:334] "Generic (PLEG): container finished" podID="f42fd0ac-99c4-49ed-87d0-fe00a580a2ea" containerID="ed2588bc733781aca510d61f0973287418f8037812841beb24d3e71e5845b435" exitCode=0 Dec 01 15:09:18 crc kubenswrapper[4637]: I1201 15:09:18.844322 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9" event={"ID":"f42fd0ac-99c4-49ed-87d0-fe00a580a2ea","Type":"ContainerDied","Data":"ed2588bc733781aca510d61f0973287418f8037812841beb24d3e71e5845b435"} Dec 01 15:09:20 crc kubenswrapper[4637]: I1201 15:09:20.305202 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9" Dec 01 15:09:20 crc kubenswrapper[4637]: I1201 15:09:20.447018 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j46kn\" (UniqueName: \"kubernetes.io/projected/f42fd0ac-99c4-49ed-87d0-fe00a580a2ea-kube-api-access-j46kn\") pod \"f42fd0ac-99c4-49ed-87d0-fe00a580a2ea\" (UID: \"f42fd0ac-99c4-49ed-87d0-fe00a580a2ea\") " Dec 01 15:09:20 crc kubenswrapper[4637]: I1201 15:09:20.447079 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f42fd0ac-99c4-49ed-87d0-fe00a580a2ea-ssh-key\") pod \"f42fd0ac-99c4-49ed-87d0-fe00a580a2ea\" (UID: \"f42fd0ac-99c4-49ed-87d0-fe00a580a2ea\") " Dec 01 15:09:20 crc kubenswrapper[4637]: I1201 15:09:20.447158 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f42fd0ac-99c4-49ed-87d0-fe00a580a2ea-repo-setup-combined-ca-bundle\") pod \"f42fd0ac-99c4-49ed-87d0-fe00a580a2ea\" (UID: \"f42fd0ac-99c4-49ed-87d0-fe00a580a2ea\") " Dec 01 15:09:20 crc kubenswrapper[4637]: I1201 15:09:20.447200 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/f42fd0ac-99c4-49ed-87d0-fe00a580a2ea-inventory\") pod \"f42fd0ac-99c4-49ed-87d0-fe00a580a2ea\" (UID: \"f42fd0ac-99c4-49ed-87d0-fe00a580a2ea\") " Dec 01 15:09:20 crc kubenswrapper[4637]: I1201 15:09:20.454944 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f42fd0ac-99c4-49ed-87d0-fe00a580a2ea-kube-api-access-j46kn" (OuterVolumeSpecName: "kube-api-access-j46kn") pod "f42fd0ac-99c4-49ed-87d0-fe00a580a2ea" (UID: "f42fd0ac-99c4-49ed-87d0-fe00a580a2ea"). InnerVolumeSpecName "kube-api-access-j46kn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:09:20 crc kubenswrapper[4637]: I1201 15:09:20.455662 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f42fd0ac-99c4-49ed-87d0-fe00a580a2ea-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "f42fd0ac-99c4-49ed-87d0-fe00a580a2ea" (UID: "f42fd0ac-99c4-49ed-87d0-fe00a580a2ea"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:09:20 crc kubenswrapper[4637]: I1201 15:09:20.479338 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f42fd0ac-99c4-49ed-87d0-fe00a580a2ea-inventory" (OuterVolumeSpecName: "inventory") pod "f42fd0ac-99c4-49ed-87d0-fe00a580a2ea" (UID: "f42fd0ac-99c4-49ed-87d0-fe00a580a2ea"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:09:20 crc kubenswrapper[4637]: I1201 15:09:20.485881 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f42fd0ac-99c4-49ed-87d0-fe00a580a2ea-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f42fd0ac-99c4-49ed-87d0-fe00a580a2ea" (UID: "f42fd0ac-99c4-49ed-87d0-fe00a580a2ea"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:09:20 crc kubenswrapper[4637]: I1201 15:09:20.549363 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j46kn\" (UniqueName: \"kubernetes.io/projected/f42fd0ac-99c4-49ed-87d0-fe00a580a2ea-kube-api-access-j46kn\") on node \"crc\" DevicePath \"\"" Dec 01 15:09:20 crc kubenswrapper[4637]: I1201 15:09:20.549401 4637 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f42fd0ac-99c4-49ed-87d0-fe00a580a2ea-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 15:09:20 crc kubenswrapper[4637]: I1201 15:09:20.549416 4637 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f42fd0ac-99c4-49ed-87d0-fe00a580a2ea-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:09:20 crc kubenswrapper[4637]: I1201 15:09:20.549430 4637 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f42fd0ac-99c4-49ed-87d0-fe00a580a2ea-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 15:09:20 crc kubenswrapper[4637]: I1201 15:09:20.873018 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9" event={"ID":"f42fd0ac-99c4-49ed-87d0-fe00a580a2ea","Type":"ContainerDied","Data":"50e79bfd0ba0f0ff8acf23af3d58e8a65be56bd6992b87fcb9ee05200144416d"} Dec 01 15:09:20 crc kubenswrapper[4637]: I1201 15:09:20.873083 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50e79bfd0ba0f0ff8acf23af3d58e8a65be56bd6992b87fcb9ee05200144416d" Dec 01 15:09:20 crc kubenswrapper[4637]: I1201 15:09:20.873172 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9" Dec 01 15:09:21 crc kubenswrapper[4637]: I1201 15:09:21.020846 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mlpn5"] Dec 01 15:09:21 crc kubenswrapper[4637]: E1201 15:09:21.021547 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f42fd0ac-99c4-49ed-87d0-fe00a580a2ea" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 01 15:09:21 crc kubenswrapper[4637]: I1201 15:09:21.021568 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="f42fd0ac-99c4-49ed-87d0-fe00a580a2ea" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 01 15:09:21 crc kubenswrapper[4637]: I1201 15:09:21.021751 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="f42fd0ac-99c4-49ed-87d0-fe00a580a2ea" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 01 15:09:21 crc kubenswrapper[4637]: I1201 15:09:21.022362 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mlpn5" Dec 01 15:09:21 crc kubenswrapper[4637]: I1201 15:09:21.024293 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 15:09:21 crc kubenswrapper[4637]: I1201 15:09:21.027388 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 15:09:21 crc kubenswrapper[4637]: I1201 15:09:21.027585 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lt5wx" Dec 01 15:09:21 crc kubenswrapper[4637]: I1201 15:09:21.030307 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 15:09:21 crc kubenswrapper[4637]: I1201 15:09:21.058071 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mlpn5"] Dec 01 15:09:21 crc kubenswrapper[4637]: I1201 15:09:21.160398 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24361437-f549-45e9-af51-6a842e4bc82e-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mlpn5\" (UID: \"24361437-f549-45e9-af51-6a842e4bc82e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mlpn5" Dec 01 15:09:21 crc kubenswrapper[4637]: I1201 15:09:21.160598 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfcxj\" (UniqueName: \"kubernetes.io/projected/24361437-f549-45e9-af51-6a842e4bc82e-kube-api-access-vfcxj\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mlpn5\" (UID: \"24361437-f549-45e9-af51-6a842e4bc82e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mlpn5" Dec 01 15:09:21 crc kubenswrapper[4637]: I1201 15:09:21.160650 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24361437-f549-45e9-af51-6a842e4bc82e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mlpn5\" (UID: \"24361437-f549-45e9-af51-6a842e4bc82e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mlpn5" Dec 01 15:09:21 crc kubenswrapper[4637]: I1201 15:09:21.262969 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24361437-f549-45e9-af51-6a842e4bc82e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mlpn5\" (UID: \"24361437-f549-45e9-af51-6a842e4bc82e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mlpn5" Dec 01 15:09:21 crc kubenswrapper[4637]: I1201 15:09:21.263075 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24361437-f549-45e9-af51-6a842e4bc82e-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mlpn5\" (UID: \"24361437-f549-45e9-af51-6a842e4bc82e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mlpn5" Dec 01 15:09:21 crc kubenswrapper[4637]: I1201 15:09:21.263178 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfcxj\" (UniqueName: \"kubernetes.io/projected/24361437-f549-45e9-af51-6a842e4bc82e-kube-api-access-vfcxj\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mlpn5\" (UID: \"24361437-f549-45e9-af51-6a842e4bc82e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mlpn5" Dec 01 15:09:21 crc kubenswrapper[4637]: I1201 15:09:21.270998 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24361437-f549-45e9-af51-6a842e4bc82e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mlpn5\" (UID: \"24361437-f549-45e9-af51-6a842e4bc82e\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mlpn5" Dec 01 15:09:21 crc kubenswrapper[4637]: I1201 15:09:21.276741 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24361437-f549-45e9-af51-6a842e4bc82e-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mlpn5\" (UID: \"24361437-f549-45e9-af51-6a842e4bc82e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mlpn5" Dec 01 15:09:21 crc kubenswrapper[4637]: I1201 15:09:21.279954 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfcxj\" (UniqueName: \"kubernetes.io/projected/24361437-f549-45e9-af51-6a842e4bc82e-kube-api-access-vfcxj\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mlpn5\" (UID: \"24361437-f549-45e9-af51-6a842e4bc82e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mlpn5" Dec 01 15:09:21 crc kubenswrapper[4637]: I1201 15:09:21.343316 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mlpn5" Dec 01 15:09:21 crc kubenswrapper[4637]: I1201 15:09:21.868907 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mlpn5"] Dec 01 15:09:22 crc kubenswrapper[4637]: I1201 15:09:22.894924 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mlpn5" event={"ID":"24361437-f549-45e9-af51-6a842e4bc82e","Type":"ContainerStarted","Data":"b1560033d637e45d33486bab25c05656288dc6ae600234813e4301cb96ed5a5b"} Dec 01 15:09:23 crc kubenswrapper[4637]: I1201 15:09:23.912366 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mlpn5" event={"ID":"24361437-f549-45e9-af51-6a842e4bc82e","Type":"ContainerStarted","Data":"1920d4a954a2f10bfeaf99e316a0bb8bd51dbc5736dd634c323af6578f81b9b7"} Dec 01 15:09:23 crc kubenswrapper[4637]: I1201 15:09:23.942650 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mlpn5" podStartSLOduration=2.185827857 podStartE2EDuration="2.942629058s" podCreationTimestamp="2025-12-01 15:09:21 +0000 UTC" firstStartedPulling="2025-12-01 15:09:21.882564098 +0000 UTC m=+1412.400272926" lastFinishedPulling="2025-12-01 15:09:22.639365299 +0000 UTC m=+1413.157074127" observedRunningTime="2025-12-01 15:09:23.936233265 +0000 UTC m=+1414.453942153" watchObservedRunningTime="2025-12-01 15:09:23.942629058 +0000 UTC m=+1414.460337896" Dec 01 15:09:25 crc kubenswrapper[4637]: I1201 15:09:25.930660 4637 generic.go:334] "Generic (PLEG): container finished" podID="24361437-f549-45e9-af51-6a842e4bc82e" containerID="1920d4a954a2f10bfeaf99e316a0bb8bd51dbc5736dd634c323af6578f81b9b7" exitCode=0 Dec 01 15:09:25 crc kubenswrapper[4637]: I1201 15:09:25.930694 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mlpn5" event={"ID":"24361437-f549-45e9-af51-6a842e4bc82e","Type":"ContainerDied","Data":"1920d4a954a2f10bfeaf99e316a0bb8bd51dbc5736dd634c323af6578f81b9b7"} Dec 01 15:09:27 crc kubenswrapper[4637]: I1201 15:09:27.328249 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mlpn5" Dec 01 15:09:27 crc kubenswrapper[4637]: I1201 15:09:27.504398 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24361437-f549-45e9-af51-6a842e4bc82e-inventory\") pod \"24361437-f549-45e9-af51-6a842e4bc82e\" (UID: \"24361437-f549-45e9-af51-6a842e4bc82e\") " Dec 01 15:09:27 crc kubenswrapper[4637]: I1201 15:09:27.504775 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfcxj\" (UniqueName: \"kubernetes.io/projected/24361437-f549-45e9-af51-6a842e4bc82e-kube-api-access-vfcxj\") pod \"24361437-f549-45e9-af51-6a842e4bc82e\" (UID: \"24361437-f549-45e9-af51-6a842e4bc82e\") " Dec 01 15:09:27 crc kubenswrapper[4637]: I1201 15:09:27.504918 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24361437-f549-45e9-af51-6a842e4bc82e-ssh-key\") pod \"24361437-f549-45e9-af51-6a842e4bc82e\" (UID: \"24361437-f549-45e9-af51-6a842e4bc82e\") " Dec 01 15:09:27 crc kubenswrapper[4637]: I1201 15:09:27.511016 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24361437-f549-45e9-af51-6a842e4bc82e-kube-api-access-vfcxj" (OuterVolumeSpecName: "kube-api-access-vfcxj") pod "24361437-f549-45e9-af51-6a842e4bc82e" (UID: "24361437-f549-45e9-af51-6a842e4bc82e"). InnerVolumeSpecName "kube-api-access-vfcxj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:09:27 crc kubenswrapper[4637]: I1201 15:09:27.535409 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24361437-f549-45e9-af51-6a842e4bc82e-inventory" (OuterVolumeSpecName: "inventory") pod "24361437-f549-45e9-af51-6a842e4bc82e" (UID: "24361437-f549-45e9-af51-6a842e4bc82e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:09:27 crc kubenswrapper[4637]: I1201 15:09:27.548787 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24361437-f549-45e9-af51-6a842e4bc82e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "24361437-f549-45e9-af51-6a842e4bc82e" (UID: "24361437-f549-45e9-af51-6a842e4bc82e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:09:27 crc kubenswrapper[4637]: I1201 15:09:27.608199 4637 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24361437-f549-45e9-af51-6a842e4bc82e-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 15:09:27 crc kubenswrapper[4637]: I1201 15:09:27.608887 4637 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24361437-f549-45e9-af51-6a842e4bc82e-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 15:09:27 crc kubenswrapper[4637]: I1201 15:09:27.609001 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfcxj\" (UniqueName: \"kubernetes.io/projected/24361437-f549-45e9-af51-6a842e4bc82e-kube-api-access-vfcxj\") on node \"crc\" DevicePath \"\"" Dec 01 15:09:27 crc kubenswrapper[4637]: I1201 15:09:27.952057 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mlpn5" 
event={"ID":"24361437-f549-45e9-af51-6a842e4bc82e","Type":"ContainerDied","Data":"b1560033d637e45d33486bab25c05656288dc6ae600234813e4301cb96ed5a5b"} Dec 01 15:09:27 crc kubenswrapper[4637]: I1201 15:09:27.952097 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1560033d637e45d33486bab25c05656288dc6ae600234813e4301cb96ed5a5b" Dec 01 15:09:27 crc kubenswrapper[4637]: I1201 15:09:27.952116 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mlpn5" Dec 01 15:09:28 crc kubenswrapper[4637]: I1201 15:09:28.432184 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r"] Dec 01 15:09:28 crc kubenswrapper[4637]: E1201 15:09:28.433233 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24361437-f549-45e9-af51-6a842e4bc82e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 01 15:09:28 crc kubenswrapper[4637]: I1201 15:09:28.433334 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="24361437-f549-45e9-af51-6a842e4bc82e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 01 15:09:28 crc kubenswrapper[4637]: I1201 15:09:28.433565 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="24361437-f549-45e9-af51-6a842e4bc82e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 01 15:09:28 crc kubenswrapper[4637]: I1201 15:09:28.434262 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r" Dec 01 15:09:28 crc kubenswrapper[4637]: I1201 15:09:28.437142 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 15:09:28 crc kubenswrapper[4637]: I1201 15:09:28.437584 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 15:09:28 crc kubenswrapper[4637]: I1201 15:09:28.440367 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lt5wx" Dec 01 15:09:28 crc kubenswrapper[4637]: I1201 15:09:28.447953 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 15:09:28 crc kubenswrapper[4637]: I1201 15:09:28.450263 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r"] Dec 01 15:09:28 crc kubenswrapper[4637]: I1201 15:09:28.524713 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/290aad22-6654-4895-ae47-8651471b42e6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r\" (UID: \"290aad22-6654-4895-ae47-8651471b42e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r" Dec 01 15:09:28 crc kubenswrapper[4637]: I1201 15:09:28.525012 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6kkp\" (UniqueName: \"kubernetes.io/projected/290aad22-6654-4895-ae47-8651471b42e6-kube-api-access-m6kkp\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r\" (UID: \"290aad22-6654-4895-ae47-8651471b42e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r" Dec 01 15:09:28 crc kubenswrapper[4637]: I1201 
15:09:28.525153 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/290aad22-6654-4895-ae47-8651471b42e6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r\" (UID: \"290aad22-6654-4895-ae47-8651471b42e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r" Dec 01 15:09:28 crc kubenswrapper[4637]: I1201 15:09:28.525271 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/290aad22-6654-4895-ae47-8651471b42e6-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r\" (UID: \"290aad22-6654-4895-ae47-8651471b42e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r" Dec 01 15:09:28 crc kubenswrapper[4637]: I1201 15:09:28.626872 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/290aad22-6654-4895-ae47-8651471b42e6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r\" (UID: \"290aad22-6654-4895-ae47-8651471b42e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r" Dec 01 15:09:28 crc kubenswrapper[4637]: I1201 15:09:28.627090 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6kkp\" (UniqueName: \"kubernetes.io/projected/290aad22-6654-4895-ae47-8651471b42e6-kube-api-access-m6kkp\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r\" (UID: \"290aad22-6654-4895-ae47-8651471b42e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r" Dec 01 15:09:28 crc kubenswrapper[4637]: I1201 15:09:28.627199 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/290aad22-6654-4895-ae47-8651471b42e6-inventory\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r\" (UID: \"290aad22-6654-4895-ae47-8651471b42e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r" Dec 01 15:09:28 crc kubenswrapper[4637]: I1201 15:09:28.627323 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/290aad22-6654-4895-ae47-8651471b42e6-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r\" (UID: \"290aad22-6654-4895-ae47-8651471b42e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r" Dec 01 15:09:28 crc kubenswrapper[4637]: I1201 15:09:28.632566 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/290aad22-6654-4895-ae47-8651471b42e6-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r\" (UID: \"290aad22-6654-4895-ae47-8651471b42e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r" Dec 01 15:09:28 crc kubenswrapper[4637]: I1201 15:09:28.632760 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/290aad22-6654-4895-ae47-8651471b42e6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r\" (UID: \"290aad22-6654-4895-ae47-8651471b42e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r" Dec 01 15:09:28 crc kubenswrapper[4637]: I1201 15:09:28.633925 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/290aad22-6654-4895-ae47-8651471b42e6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r\" (UID: \"290aad22-6654-4895-ae47-8651471b42e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r" Dec 01 15:09:28 crc kubenswrapper[4637]: I1201 15:09:28.647133 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-m6kkp\" (UniqueName: \"kubernetes.io/projected/290aad22-6654-4895-ae47-8651471b42e6-kube-api-access-m6kkp\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r\" (UID: \"290aad22-6654-4895-ae47-8651471b42e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r" Dec 01 15:09:28 crc kubenswrapper[4637]: I1201 15:09:28.752827 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r" Dec 01 15:09:29 crc kubenswrapper[4637]: I1201 15:09:29.378966 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r"] Dec 01 15:09:29 crc kubenswrapper[4637]: I1201 15:09:29.973354 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r" event={"ID":"290aad22-6654-4895-ae47-8651471b42e6","Type":"ContainerStarted","Data":"493b3501f879494f9e4e17a60ee323d0d54fc7bde46e709eab75dfceaae489eb"} Dec 01 15:09:31 crc kubenswrapper[4637]: I1201 15:09:31.008496 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r" event={"ID":"290aad22-6654-4895-ae47-8651471b42e6","Type":"ContainerStarted","Data":"1bc45b375f5edc0fb5c3e398a6e3be0d7d959499040a86425870ee969595c4a9"} Dec 01 15:09:31 crc kubenswrapper[4637]: I1201 15:09:31.040565 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r" podStartSLOduration=2.42812896 podStartE2EDuration="3.040546023s" podCreationTimestamp="2025-12-01 15:09:28 +0000 UTC" firstStartedPulling="2025-12-01 15:09:29.385922986 +0000 UTC m=+1419.903631824" lastFinishedPulling="2025-12-01 15:09:29.998340059 +0000 UTC m=+1420.516048887" observedRunningTime="2025-12-01 15:09:31.0256098 +0000 UTC m=+1421.543318638" watchObservedRunningTime="2025-12-01 15:09:31.040546023 
+0000 UTC m=+1421.558254851" Dec 01 15:09:31 crc kubenswrapper[4637]: I1201 15:09:31.990573 4637 scope.go:117] "RemoveContainer" containerID="20107288aa1c7ee4b1420f18544a41c6bcd7cedfbbe2c33eb0599adb88be0591" Dec 01 15:10:32 crc kubenswrapper[4637]: I1201 15:10:32.052114 4637 scope.go:117] "RemoveContainer" containerID="05baf4e48431d86d1db24e1c31feda024361d5cb7bd53c067a91d2438d5e21b9" Dec 01 15:10:32 crc kubenswrapper[4637]: I1201 15:10:32.074031 4637 scope.go:117] "RemoveContainer" containerID="bb61d65a183af09f46193ff5def82d41335b0928adf9ce99fc5ec78541be8c3e" Dec 01 15:10:32 crc kubenswrapper[4637]: I1201 15:10:32.110453 4637 scope.go:117] "RemoveContainer" containerID="15dc05e4253bfaee5d3e148c008f60d4dfa934c274ce587be883c00844f3a7d8" Dec 01 15:10:45 crc kubenswrapper[4637]: I1201 15:10:45.613611 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:10:45 crc kubenswrapper[4637]: I1201 15:10:45.614155 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:11:15 crc kubenswrapper[4637]: I1201 15:11:15.613423 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:11:15 crc kubenswrapper[4637]: I1201 15:11:15.614004 4637 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:11:39 crc kubenswrapper[4637]: I1201 15:11:39.430012 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gftc2"] Dec 01 15:11:39 crc kubenswrapper[4637]: I1201 15:11:39.436100 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gftc2" Dec 01 15:11:39 crc kubenswrapper[4637]: I1201 15:11:39.461280 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gftc2"] Dec 01 15:11:39 crc kubenswrapper[4637]: I1201 15:11:39.479015 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8dea35a-d142-4d51-9045-eba9f8449490-utilities\") pod \"community-operators-gftc2\" (UID: \"d8dea35a-d142-4d51-9045-eba9f8449490\") " pod="openshift-marketplace/community-operators-gftc2" Dec 01 15:11:39 crc kubenswrapper[4637]: I1201 15:11:39.479111 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8dea35a-d142-4d51-9045-eba9f8449490-catalog-content\") pod \"community-operators-gftc2\" (UID: \"d8dea35a-d142-4d51-9045-eba9f8449490\") " pod="openshift-marketplace/community-operators-gftc2" Dec 01 15:11:39 crc kubenswrapper[4637]: I1201 15:11:39.479272 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ctzg\" (UniqueName: \"kubernetes.io/projected/d8dea35a-d142-4d51-9045-eba9f8449490-kube-api-access-7ctzg\") pod \"community-operators-gftc2\" (UID: 
\"d8dea35a-d142-4d51-9045-eba9f8449490\") " pod="openshift-marketplace/community-operators-gftc2" Dec 01 15:11:39 crc kubenswrapper[4637]: I1201 15:11:39.580892 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ctzg\" (UniqueName: \"kubernetes.io/projected/d8dea35a-d142-4d51-9045-eba9f8449490-kube-api-access-7ctzg\") pod \"community-operators-gftc2\" (UID: \"d8dea35a-d142-4d51-9045-eba9f8449490\") " pod="openshift-marketplace/community-operators-gftc2" Dec 01 15:11:39 crc kubenswrapper[4637]: I1201 15:11:39.581039 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8dea35a-d142-4d51-9045-eba9f8449490-utilities\") pod \"community-operators-gftc2\" (UID: \"d8dea35a-d142-4d51-9045-eba9f8449490\") " pod="openshift-marketplace/community-operators-gftc2" Dec 01 15:11:39 crc kubenswrapper[4637]: I1201 15:11:39.581107 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8dea35a-d142-4d51-9045-eba9f8449490-catalog-content\") pod \"community-operators-gftc2\" (UID: \"d8dea35a-d142-4d51-9045-eba9f8449490\") " pod="openshift-marketplace/community-operators-gftc2" Dec 01 15:11:39 crc kubenswrapper[4637]: I1201 15:11:39.581624 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8dea35a-d142-4d51-9045-eba9f8449490-utilities\") pod \"community-operators-gftc2\" (UID: \"d8dea35a-d142-4d51-9045-eba9f8449490\") " pod="openshift-marketplace/community-operators-gftc2" Dec 01 15:11:39 crc kubenswrapper[4637]: I1201 15:11:39.581701 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8dea35a-d142-4d51-9045-eba9f8449490-catalog-content\") pod \"community-operators-gftc2\" (UID: \"d8dea35a-d142-4d51-9045-eba9f8449490\") 
" pod="openshift-marketplace/community-operators-gftc2" Dec 01 15:11:39 crc kubenswrapper[4637]: I1201 15:11:39.602010 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ctzg\" (UniqueName: \"kubernetes.io/projected/d8dea35a-d142-4d51-9045-eba9f8449490-kube-api-access-7ctzg\") pod \"community-operators-gftc2\" (UID: \"d8dea35a-d142-4d51-9045-eba9f8449490\") " pod="openshift-marketplace/community-operators-gftc2" Dec 01 15:11:39 crc kubenswrapper[4637]: I1201 15:11:39.757681 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gftc2" Dec 01 15:11:40 crc kubenswrapper[4637]: I1201 15:11:40.254920 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gftc2"] Dec 01 15:11:40 crc kubenswrapper[4637]: I1201 15:11:40.314456 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gftc2" event={"ID":"d8dea35a-d142-4d51-9045-eba9f8449490","Type":"ContainerStarted","Data":"601fe9f829343b5828e945f5f23b059c9b461ae4ab1e01de5f2870012393e30b"} Dec 01 15:11:41 crc kubenswrapper[4637]: I1201 15:11:41.326183 4637 generic.go:334] "Generic (PLEG): container finished" podID="d8dea35a-d142-4d51-9045-eba9f8449490" containerID="85d0f1048663ed7a0b3d7a3aa639e02e19d6cb3d11000aa11caacdea6154d43c" exitCode=0 Dec 01 15:11:41 crc kubenswrapper[4637]: I1201 15:11:41.326288 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gftc2" event={"ID":"d8dea35a-d142-4d51-9045-eba9f8449490","Type":"ContainerDied","Data":"85d0f1048663ed7a0b3d7a3aa639e02e19d6cb3d11000aa11caacdea6154d43c"} Dec 01 15:11:41 crc kubenswrapper[4637]: I1201 15:11:41.329422 4637 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 15:11:45 crc kubenswrapper[4637]: I1201 15:11:45.613706 4637 patch_prober.go:28] interesting 
pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:11:45 crc kubenswrapper[4637]: I1201 15:11:45.614715 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:11:45 crc kubenswrapper[4637]: I1201 15:11:45.614785 4637 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 15:11:45 crc kubenswrapper[4637]: I1201 15:11:45.615659 4637 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8"} pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 15:11:45 crc kubenswrapper[4637]: I1201 15:11:45.615878 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" containerID="cri-o://6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8" gracePeriod=600 Dec 01 15:11:46 crc kubenswrapper[4637]: I1201 15:11:46.387698 4637 generic.go:334] "Generic (PLEG): container finished" podID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerID="6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8" exitCode=0 Dec 01 15:11:46 crc kubenswrapper[4637]: I1201 15:11:46.387845 
4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerDied","Data":"6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8"} Dec 01 15:11:46 crc kubenswrapper[4637]: I1201 15:11:46.388174 4637 scope.go:117] "RemoveContainer" containerID="0c5320ef3b59baf3d6a19e6cd72f308b7cd46bf2e7050ff92c6f67ab6ef1839a" Dec 01 15:11:46 crc kubenswrapper[4637]: E1201 15:11:46.424873 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:11:47 crc kubenswrapper[4637]: I1201 15:11:47.403598 4637 scope.go:117] "RemoveContainer" containerID="6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8" Dec 01 15:11:47 crc kubenswrapper[4637]: E1201 15:11:47.404373 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:11:47 crc kubenswrapper[4637]: I1201 15:11:47.405059 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gftc2" event={"ID":"d8dea35a-d142-4d51-9045-eba9f8449490","Type":"ContainerStarted","Data":"24ff1032e41a2927d5355731718098da8524c674d2acc3c596cafaae56717edd"} Dec 01 15:11:48 crc kubenswrapper[4637]: I1201 
15:11:48.414493 4637 generic.go:334] "Generic (PLEG): container finished" podID="d8dea35a-d142-4d51-9045-eba9f8449490" containerID="24ff1032e41a2927d5355731718098da8524c674d2acc3c596cafaae56717edd" exitCode=0 Dec 01 15:11:48 crc kubenswrapper[4637]: I1201 15:11:48.414547 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gftc2" event={"ID":"d8dea35a-d142-4d51-9045-eba9f8449490","Type":"ContainerDied","Data":"24ff1032e41a2927d5355731718098da8524c674d2acc3c596cafaae56717edd"} Dec 01 15:11:49 crc kubenswrapper[4637]: I1201 15:11:49.424971 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gftc2" event={"ID":"d8dea35a-d142-4d51-9045-eba9f8449490","Type":"ContainerStarted","Data":"f2df1e342ea4424b14cd00aa98745d3aa15e52c81fc12c7c900e8b67e208bcf4"} Dec 01 15:11:49 crc kubenswrapper[4637]: I1201 15:11:49.452296 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gftc2" podStartSLOduration=2.860606979 podStartE2EDuration="10.452269903s" podCreationTimestamp="2025-12-01 15:11:39 +0000 UTC" firstStartedPulling="2025-12-01 15:11:41.328828477 +0000 UTC m=+1551.846537305" lastFinishedPulling="2025-12-01 15:11:48.920491361 +0000 UTC m=+1559.438200229" observedRunningTime="2025-12-01 15:11:49.447484333 +0000 UTC m=+1559.965193161" watchObservedRunningTime="2025-12-01 15:11:49.452269903 +0000 UTC m=+1559.969978731" Dec 01 15:11:49 crc kubenswrapper[4637]: I1201 15:11:49.758384 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gftc2" Dec 01 15:11:49 crc kubenswrapper[4637]: I1201 15:11:49.758445 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gftc2" Dec 01 15:11:50 crc kubenswrapper[4637]: I1201 15:11:50.812200 4637 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/community-operators-gftc2" podUID="d8dea35a-d142-4d51-9045-eba9f8449490" containerName="registry-server" probeResult="failure" output=< Dec 01 15:11:50 crc kubenswrapper[4637]: timeout: failed to connect service ":50051" within 1s Dec 01 15:11:50 crc kubenswrapper[4637]: > Dec 01 15:12:00 crc kubenswrapper[4637]: I1201 15:12:00.073341 4637 scope.go:117] "RemoveContainer" containerID="6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8" Dec 01 15:12:00 crc kubenswrapper[4637]: E1201 15:12:00.126595 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:12:00 crc kubenswrapper[4637]: I1201 15:12:00.193545 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gftc2" Dec 01 15:12:00 crc kubenswrapper[4637]: I1201 15:12:00.280019 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gftc2" Dec 01 15:12:00 crc kubenswrapper[4637]: I1201 15:12:00.366139 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gftc2"] Dec 01 15:12:00 crc kubenswrapper[4637]: I1201 15:12:00.474432 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gbh9m"] Dec 01 15:12:01 crc kubenswrapper[4637]: I1201 15:12:01.096440 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gbh9m" podUID="8331a591-a7d8-4c36-ae47-f973a8468986" containerName="registry-server" 
containerID="cri-o://42a163611de55e35b6f178678d3b655bc35cc611d29c752c0e335c2d018a597b" gracePeriod=2 Dec 01 15:12:01 crc kubenswrapper[4637]: I1201 15:12:01.714045 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gbh9m" Dec 01 15:12:01 crc kubenswrapper[4637]: I1201 15:12:01.725589 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8331a591-a7d8-4c36-ae47-f973a8468986-utilities\") pod \"8331a591-a7d8-4c36-ae47-f973a8468986\" (UID: \"8331a591-a7d8-4c36-ae47-f973a8468986\") " Dec 01 15:12:01 crc kubenswrapper[4637]: I1201 15:12:01.725848 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8331a591-a7d8-4c36-ae47-f973a8468986-catalog-content\") pod \"8331a591-a7d8-4c36-ae47-f973a8468986\" (UID: \"8331a591-a7d8-4c36-ae47-f973a8468986\") " Dec 01 15:12:01 crc kubenswrapper[4637]: I1201 15:12:01.725882 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngjcb\" (UniqueName: \"kubernetes.io/projected/8331a591-a7d8-4c36-ae47-f973a8468986-kube-api-access-ngjcb\") pod \"8331a591-a7d8-4c36-ae47-f973a8468986\" (UID: \"8331a591-a7d8-4c36-ae47-f973a8468986\") " Dec 01 15:12:01 crc kubenswrapper[4637]: I1201 15:12:01.730702 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8331a591-a7d8-4c36-ae47-f973a8468986-utilities" (OuterVolumeSpecName: "utilities") pod "8331a591-a7d8-4c36-ae47-f973a8468986" (UID: "8331a591-a7d8-4c36-ae47-f973a8468986"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:12:01 crc kubenswrapper[4637]: I1201 15:12:01.742171 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8331a591-a7d8-4c36-ae47-f973a8468986-kube-api-access-ngjcb" (OuterVolumeSpecName: "kube-api-access-ngjcb") pod "8331a591-a7d8-4c36-ae47-f973a8468986" (UID: "8331a591-a7d8-4c36-ae47-f973a8468986"). InnerVolumeSpecName "kube-api-access-ngjcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:12:01 crc kubenswrapper[4637]: I1201 15:12:01.836745 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngjcb\" (UniqueName: \"kubernetes.io/projected/8331a591-a7d8-4c36-ae47-f973a8468986-kube-api-access-ngjcb\") on node \"crc\" DevicePath \"\"" Dec 01 15:12:01 crc kubenswrapper[4637]: I1201 15:12:01.836785 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8331a591-a7d8-4c36-ae47-f973a8468986-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 15:12:01 crc kubenswrapper[4637]: I1201 15:12:01.859902 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8331a591-a7d8-4c36-ae47-f973a8468986-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8331a591-a7d8-4c36-ae47-f973a8468986" (UID: "8331a591-a7d8-4c36-ae47-f973a8468986"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:12:01 crc kubenswrapper[4637]: I1201 15:12:01.938661 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8331a591-a7d8-4c36-ae47-f973a8468986-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 15:12:02 crc kubenswrapper[4637]: I1201 15:12:02.140617 4637 generic.go:334] "Generic (PLEG): container finished" podID="8331a591-a7d8-4c36-ae47-f973a8468986" containerID="42a163611de55e35b6f178678d3b655bc35cc611d29c752c0e335c2d018a597b" exitCode=0 Dec 01 15:12:02 crc kubenswrapper[4637]: I1201 15:12:02.140677 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gbh9m" event={"ID":"8331a591-a7d8-4c36-ae47-f973a8468986","Type":"ContainerDied","Data":"42a163611de55e35b6f178678d3b655bc35cc611d29c752c0e335c2d018a597b"} Dec 01 15:12:02 crc kubenswrapper[4637]: I1201 15:12:02.140720 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gbh9m" event={"ID":"8331a591-a7d8-4c36-ae47-f973a8468986","Type":"ContainerDied","Data":"1bde9b1d73b81013b4f9fa92f0e952962fce973bb0b7bd2cd964f70f0d1f196b"} Dec 01 15:12:02 crc kubenswrapper[4637]: I1201 15:12:02.140740 4637 scope.go:117] "RemoveContainer" containerID="42a163611de55e35b6f178678d3b655bc35cc611d29c752c0e335c2d018a597b" Dec 01 15:12:02 crc kubenswrapper[4637]: I1201 15:12:02.141823 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gbh9m" Dec 01 15:12:02 crc kubenswrapper[4637]: I1201 15:12:02.206388 4637 scope.go:117] "RemoveContainer" containerID="1f3619e663b8c73d6d295f48e61eb7a33422a54d09245a184ed5d0a60b85b4e1" Dec 01 15:12:02 crc kubenswrapper[4637]: I1201 15:12:02.211043 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gbh9m"] Dec 01 15:12:02 crc kubenswrapper[4637]: I1201 15:12:02.221207 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gbh9m"] Dec 01 15:12:02 crc kubenswrapper[4637]: I1201 15:12:02.238518 4637 scope.go:117] "RemoveContainer" containerID="71c58bb4e7fdfae3241301408d1a4c6ecd4ae61d68af8398bc20efcbfcbc2f4f" Dec 01 15:12:02 crc kubenswrapper[4637]: I1201 15:12:02.277508 4637 scope.go:117] "RemoveContainer" containerID="42a163611de55e35b6f178678d3b655bc35cc611d29c752c0e335c2d018a597b" Dec 01 15:12:02 crc kubenswrapper[4637]: E1201 15:12:02.279048 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42a163611de55e35b6f178678d3b655bc35cc611d29c752c0e335c2d018a597b\": container with ID starting with 42a163611de55e35b6f178678d3b655bc35cc611d29c752c0e335c2d018a597b not found: ID does not exist" containerID="42a163611de55e35b6f178678d3b655bc35cc611d29c752c0e335c2d018a597b" Dec 01 15:12:02 crc kubenswrapper[4637]: I1201 15:12:02.279097 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42a163611de55e35b6f178678d3b655bc35cc611d29c752c0e335c2d018a597b"} err="failed to get container status \"42a163611de55e35b6f178678d3b655bc35cc611d29c752c0e335c2d018a597b\": rpc error: code = NotFound desc = could not find container \"42a163611de55e35b6f178678d3b655bc35cc611d29c752c0e335c2d018a597b\": container with ID starting with 42a163611de55e35b6f178678d3b655bc35cc611d29c752c0e335c2d018a597b not 
found: ID does not exist" Dec 01 15:12:02 crc kubenswrapper[4637]: I1201 15:12:02.279555 4637 scope.go:117] "RemoveContainer" containerID="1f3619e663b8c73d6d295f48e61eb7a33422a54d09245a184ed5d0a60b85b4e1" Dec 01 15:12:02 crc kubenswrapper[4637]: E1201 15:12:02.281255 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f3619e663b8c73d6d295f48e61eb7a33422a54d09245a184ed5d0a60b85b4e1\": container with ID starting with 1f3619e663b8c73d6d295f48e61eb7a33422a54d09245a184ed5d0a60b85b4e1 not found: ID does not exist" containerID="1f3619e663b8c73d6d295f48e61eb7a33422a54d09245a184ed5d0a60b85b4e1" Dec 01 15:12:02 crc kubenswrapper[4637]: I1201 15:12:02.281350 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f3619e663b8c73d6d295f48e61eb7a33422a54d09245a184ed5d0a60b85b4e1"} err="failed to get container status \"1f3619e663b8c73d6d295f48e61eb7a33422a54d09245a184ed5d0a60b85b4e1\": rpc error: code = NotFound desc = could not find container \"1f3619e663b8c73d6d295f48e61eb7a33422a54d09245a184ed5d0a60b85b4e1\": container with ID starting with 1f3619e663b8c73d6d295f48e61eb7a33422a54d09245a184ed5d0a60b85b4e1 not found: ID does not exist" Dec 01 15:12:02 crc kubenswrapper[4637]: I1201 15:12:02.281449 4637 scope.go:117] "RemoveContainer" containerID="71c58bb4e7fdfae3241301408d1a4c6ecd4ae61d68af8398bc20efcbfcbc2f4f" Dec 01 15:12:02 crc kubenswrapper[4637]: E1201 15:12:02.281802 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71c58bb4e7fdfae3241301408d1a4c6ecd4ae61d68af8398bc20efcbfcbc2f4f\": container with ID starting with 71c58bb4e7fdfae3241301408d1a4c6ecd4ae61d68af8398bc20efcbfcbc2f4f not found: ID does not exist" containerID="71c58bb4e7fdfae3241301408d1a4c6ecd4ae61d68af8398bc20efcbfcbc2f4f" Dec 01 15:12:02 crc kubenswrapper[4637]: I1201 15:12:02.281856 4637 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71c58bb4e7fdfae3241301408d1a4c6ecd4ae61d68af8398bc20efcbfcbc2f4f"} err="failed to get container status \"71c58bb4e7fdfae3241301408d1a4c6ecd4ae61d68af8398bc20efcbfcbc2f4f\": rpc error: code = NotFound desc = could not find container \"71c58bb4e7fdfae3241301408d1a4c6ecd4ae61d68af8398bc20efcbfcbc2f4f\": container with ID starting with 71c58bb4e7fdfae3241301408d1a4c6ecd4ae61d68af8398bc20efcbfcbc2f4f not found: ID does not exist" Dec 01 15:12:03 crc kubenswrapper[4637]: I1201 15:12:03.784103 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8331a591-a7d8-4c36-ae47-f973a8468986" path="/var/lib/kubelet/pods/8331a591-a7d8-4c36-ae47-f973a8468986/volumes" Dec 01 15:12:11 crc kubenswrapper[4637]: I1201 15:12:11.772136 4637 scope.go:117] "RemoveContainer" containerID="6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8" Dec 01 15:12:11 crc kubenswrapper[4637]: E1201 15:12:11.772983 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:12:23 crc kubenswrapper[4637]: I1201 15:12:23.772336 4637 scope.go:117] "RemoveContainer" containerID="6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8" Dec 01 15:12:23 crc kubenswrapper[4637]: E1201 15:12:23.773710 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:12:29 crc kubenswrapper[4637]: I1201 15:12:29.042819 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f4xh2"] Dec 01 15:12:29 crc kubenswrapper[4637]: E1201 15:12:29.044353 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8331a591-a7d8-4c36-ae47-f973a8468986" containerName="registry-server" Dec 01 15:12:29 crc kubenswrapper[4637]: I1201 15:12:29.044376 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="8331a591-a7d8-4c36-ae47-f973a8468986" containerName="registry-server" Dec 01 15:12:29 crc kubenswrapper[4637]: E1201 15:12:29.044398 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8331a591-a7d8-4c36-ae47-f973a8468986" containerName="extract-content" Dec 01 15:12:29 crc kubenswrapper[4637]: I1201 15:12:29.044409 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="8331a591-a7d8-4c36-ae47-f973a8468986" containerName="extract-content" Dec 01 15:12:29 crc kubenswrapper[4637]: E1201 15:12:29.044426 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8331a591-a7d8-4c36-ae47-f973a8468986" containerName="extract-utilities" Dec 01 15:12:29 crc kubenswrapper[4637]: I1201 15:12:29.044434 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="8331a591-a7d8-4c36-ae47-f973a8468986" containerName="extract-utilities" Dec 01 15:12:29 crc kubenswrapper[4637]: I1201 15:12:29.044720 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="8331a591-a7d8-4c36-ae47-f973a8468986" containerName="registry-server" Dec 01 15:12:29 crc kubenswrapper[4637]: I1201 15:12:29.046884 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4xh2" Dec 01 15:12:29 crc kubenswrapper[4637]: I1201 15:12:29.064894 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f4xh2"] Dec 01 15:12:29 crc kubenswrapper[4637]: I1201 15:12:29.146635 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3-catalog-content\") pod \"redhat-operators-f4xh2\" (UID: \"cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3\") " pod="openshift-marketplace/redhat-operators-f4xh2" Dec 01 15:12:29 crc kubenswrapper[4637]: I1201 15:12:29.146682 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3-utilities\") pod \"redhat-operators-f4xh2\" (UID: \"cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3\") " pod="openshift-marketplace/redhat-operators-f4xh2" Dec 01 15:12:29 crc kubenswrapper[4637]: I1201 15:12:29.147350 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5c87\" (UniqueName: \"kubernetes.io/projected/cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3-kube-api-access-m5c87\") pod \"redhat-operators-f4xh2\" (UID: \"cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3\") " pod="openshift-marketplace/redhat-operators-f4xh2" Dec 01 15:12:29 crc kubenswrapper[4637]: I1201 15:12:29.249020 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3-catalog-content\") pod \"redhat-operators-f4xh2\" (UID: \"cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3\") " pod="openshift-marketplace/redhat-operators-f4xh2" Dec 01 15:12:29 crc kubenswrapper[4637]: I1201 15:12:29.249076 4637 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3-utilities\") pod \"redhat-operators-f4xh2\" (UID: \"cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3\") " pod="openshift-marketplace/redhat-operators-f4xh2" Dec 01 15:12:29 crc kubenswrapper[4637]: I1201 15:12:29.249226 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5c87\" (UniqueName: \"kubernetes.io/projected/cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3-kube-api-access-m5c87\") pod \"redhat-operators-f4xh2\" (UID: \"cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3\") " pod="openshift-marketplace/redhat-operators-f4xh2" Dec 01 15:12:29 crc kubenswrapper[4637]: I1201 15:12:29.249792 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3-catalog-content\") pod \"redhat-operators-f4xh2\" (UID: \"cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3\") " pod="openshift-marketplace/redhat-operators-f4xh2" Dec 01 15:12:29 crc kubenswrapper[4637]: I1201 15:12:29.249918 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3-utilities\") pod \"redhat-operators-f4xh2\" (UID: \"cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3\") " pod="openshift-marketplace/redhat-operators-f4xh2" Dec 01 15:12:29 crc kubenswrapper[4637]: I1201 15:12:29.275537 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5c87\" (UniqueName: \"kubernetes.io/projected/cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3-kube-api-access-m5c87\") pod \"redhat-operators-f4xh2\" (UID: \"cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3\") " pod="openshift-marketplace/redhat-operators-f4xh2" Dec 01 15:12:29 crc kubenswrapper[4637]: I1201 15:12:29.375478 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4xh2" Dec 01 15:12:29 crc kubenswrapper[4637]: I1201 15:12:29.841205 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f4xh2"] Dec 01 15:12:30 crc kubenswrapper[4637]: I1201 15:12:30.396420 4637 generic.go:334] "Generic (PLEG): container finished" podID="cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3" containerID="cf3c5f74a679cf7926ba56e97fc17432eb98877ef3ffdb3f8fe75b00d5b85ad0" exitCode=0 Dec 01 15:12:30 crc kubenswrapper[4637]: I1201 15:12:30.396514 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4xh2" event={"ID":"cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3","Type":"ContainerDied","Data":"cf3c5f74a679cf7926ba56e97fc17432eb98877ef3ffdb3f8fe75b00d5b85ad0"} Dec 01 15:12:30 crc kubenswrapper[4637]: I1201 15:12:30.396579 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4xh2" event={"ID":"cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3","Type":"ContainerStarted","Data":"4db09ca9ba47251d1cebe0c3404aabbf988193bd08bab0ba4be92dfd0bbba3ed"} Dec 01 15:12:32 crc kubenswrapper[4637]: I1201 15:12:32.210496 4637 scope.go:117] "RemoveContainer" containerID="78079b022401b03e285b0f0813807f55476f29097b8cd4c98c9e8d97cab0d1a9" Dec 01 15:12:32 crc kubenswrapper[4637]: I1201 15:12:32.232281 4637 scope.go:117] "RemoveContainer" containerID="cb655c74ff501d29e7160a5e02cbd9f14abccb75a96496b717fd4e5f1fc2004e" Dec 01 15:12:32 crc kubenswrapper[4637]: I1201 15:12:32.418381 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4xh2" event={"ID":"cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3","Type":"ContainerStarted","Data":"a0b177fb2a4752086bcfbabade4accf3dd6afbe4e55f8c23c33acaf8038df488"} Dec 01 15:12:35 crc kubenswrapper[4637]: I1201 15:12:35.446109 4637 generic.go:334] "Generic (PLEG): container finished" podID="cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3" 
containerID="a0b177fb2a4752086bcfbabade4accf3dd6afbe4e55f8c23c33acaf8038df488" exitCode=0 Dec 01 15:12:35 crc kubenswrapper[4637]: I1201 15:12:35.446225 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4xh2" event={"ID":"cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3","Type":"ContainerDied","Data":"a0b177fb2a4752086bcfbabade4accf3dd6afbe4e55f8c23c33acaf8038df488"} Dec 01 15:12:36 crc kubenswrapper[4637]: I1201 15:12:36.456250 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4xh2" event={"ID":"cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3","Type":"ContainerStarted","Data":"9810c839c734dc2891324d8d2a636a5dc04348fdd7fcc6aa436dcc11a87498b9"} Dec 01 15:12:36 crc kubenswrapper[4637]: I1201 15:12:36.480665 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f4xh2" podStartSLOduration=2.006019646 podStartE2EDuration="7.480644086s" podCreationTimestamp="2025-12-01 15:12:29 +0000 UTC" firstStartedPulling="2025-12-01 15:12:30.400548395 +0000 UTC m=+1600.918257233" lastFinishedPulling="2025-12-01 15:12:35.875172825 +0000 UTC m=+1606.392881673" observedRunningTime="2025-12-01 15:12:36.472278191 +0000 UTC m=+1606.989987019" watchObservedRunningTime="2025-12-01 15:12:36.480644086 +0000 UTC m=+1606.998352914" Dec 01 15:12:38 crc kubenswrapper[4637]: I1201 15:12:38.772713 4637 scope.go:117] "RemoveContainer" containerID="6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8" Dec 01 15:12:38 crc kubenswrapper[4637]: E1201 15:12:38.773700 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" 
podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:12:39 crc kubenswrapper[4637]: I1201 15:12:39.376139 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f4xh2" Dec 01 15:12:39 crc kubenswrapper[4637]: I1201 15:12:39.376250 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f4xh2" Dec 01 15:12:40 crc kubenswrapper[4637]: I1201 15:12:40.438362 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f4xh2" podUID="cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3" containerName="registry-server" probeResult="failure" output=< Dec 01 15:12:40 crc kubenswrapper[4637]: timeout: failed to connect service ":50051" within 1s Dec 01 15:12:40 crc kubenswrapper[4637]: > Dec 01 15:12:44 crc kubenswrapper[4637]: I1201 15:12:44.819806 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q25vd"] Dec 01 15:12:44 crc kubenswrapper[4637]: I1201 15:12:44.822375 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q25vd" Dec 01 15:12:44 crc kubenswrapper[4637]: I1201 15:12:44.841397 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q25vd"] Dec 01 15:12:44 crc kubenswrapper[4637]: I1201 15:12:44.891318 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72scw\" (UniqueName: \"kubernetes.io/projected/bc13b662-0282-4e7c-bb90-90c34eb84dc6-kube-api-access-72scw\") pod \"certified-operators-q25vd\" (UID: \"bc13b662-0282-4e7c-bb90-90c34eb84dc6\") " pod="openshift-marketplace/certified-operators-q25vd" Dec 01 15:12:44 crc kubenswrapper[4637]: I1201 15:12:44.891476 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc13b662-0282-4e7c-bb90-90c34eb84dc6-utilities\") pod \"certified-operators-q25vd\" (UID: \"bc13b662-0282-4e7c-bb90-90c34eb84dc6\") " pod="openshift-marketplace/certified-operators-q25vd" Dec 01 15:12:44 crc kubenswrapper[4637]: I1201 15:12:44.892046 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc13b662-0282-4e7c-bb90-90c34eb84dc6-catalog-content\") pod \"certified-operators-q25vd\" (UID: \"bc13b662-0282-4e7c-bb90-90c34eb84dc6\") " pod="openshift-marketplace/certified-operators-q25vd" Dec 01 15:12:44 crc kubenswrapper[4637]: I1201 15:12:44.994045 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc13b662-0282-4e7c-bb90-90c34eb84dc6-utilities\") pod \"certified-operators-q25vd\" (UID: \"bc13b662-0282-4e7c-bb90-90c34eb84dc6\") " pod="openshift-marketplace/certified-operators-q25vd" Dec 01 15:12:44 crc kubenswrapper[4637]: I1201 15:12:44.994163 4637 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc13b662-0282-4e7c-bb90-90c34eb84dc6-catalog-content\") pod \"certified-operators-q25vd\" (UID: \"bc13b662-0282-4e7c-bb90-90c34eb84dc6\") " pod="openshift-marketplace/certified-operators-q25vd" Dec 01 15:12:44 crc kubenswrapper[4637]: I1201 15:12:44.994306 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72scw\" (UniqueName: \"kubernetes.io/projected/bc13b662-0282-4e7c-bb90-90c34eb84dc6-kube-api-access-72scw\") pod \"certified-operators-q25vd\" (UID: \"bc13b662-0282-4e7c-bb90-90c34eb84dc6\") " pod="openshift-marketplace/certified-operators-q25vd" Dec 01 15:12:44 crc kubenswrapper[4637]: I1201 15:12:44.994571 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc13b662-0282-4e7c-bb90-90c34eb84dc6-utilities\") pod \"certified-operators-q25vd\" (UID: \"bc13b662-0282-4e7c-bb90-90c34eb84dc6\") " pod="openshift-marketplace/certified-operators-q25vd" Dec 01 15:12:44 crc kubenswrapper[4637]: I1201 15:12:44.994637 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc13b662-0282-4e7c-bb90-90c34eb84dc6-catalog-content\") pod \"certified-operators-q25vd\" (UID: \"bc13b662-0282-4e7c-bb90-90c34eb84dc6\") " pod="openshift-marketplace/certified-operators-q25vd" Dec 01 15:12:45 crc kubenswrapper[4637]: I1201 15:12:45.015750 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72scw\" (UniqueName: \"kubernetes.io/projected/bc13b662-0282-4e7c-bb90-90c34eb84dc6-kube-api-access-72scw\") pod \"certified-operators-q25vd\" (UID: \"bc13b662-0282-4e7c-bb90-90c34eb84dc6\") " pod="openshift-marketplace/certified-operators-q25vd" Dec 01 15:12:45 crc kubenswrapper[4637]: I1201 15:12:45.154869 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q25vd" Dec 01 15:12:45 crc kubenswrapper[4637]: I1201 15:12:45.672497 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q25vd"] Dec 01 15:12:46 crc kubenswrapper[4637]: I1201 15:12:46.539923 4637 generic.go:334] "Generic (PLEG): container finished" podID="bc13b662-0282-4e7c-bb90-90c34eb84dc6" containerID="344b74bbc56af2bb60d6b612947549252fc301feb2d1b390c88604ac41649ddf" exitCode=0 Dec 01 15:12:46 crc kubenswrapper[4637]: I1201 15:12:46.539995 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q25vd" event={"ID":"bc13b662-0282-4e7c-bb90-90c34eb84dc6","Type":"ContainerDied","Data":"344b74bbc56af2bb60d6b612947549252fc301feb2d1b390c88604ac41649ddf"} Dec 01 15:12:46 crc kubenswrapper[4637]: I1201 15:12:46.540239 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q25vd" event={"ID":"bc13b662-0282-4e7c-bb90-90c34eb84dc6","Type":"ContainerStarted","Data":"7d7db492bbc0520e3a7be8e6aef470a392f6830ffab28c71fe3884dd9a634f59"} Dec 01 15:12:49 crc kubenswrapper[4637]: I1201 15:12:49.055145 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-4r967"] Dec 01 15:12:49 crc kubenswrapper[4637]: I1201 15:12:49.067153 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-v7784"] Dec 01 15:12:49 crc kubenswrapper[4637]: I1201 15:12:49.076740 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-f2htt"] Dec 01 15:12:49 crc kubenswrapper[4637]: I1201 15:12:49.087130 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-4r967"] Dec 01 15:12:49 crc kubenswrapper[4637]: I1201 15:12:49.102996 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-f2htt"] Dec 01 15:12:49 crc kubenswrapper[4637]: I1201 
15:12:49.114086 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-v7784"] Dec 01 15:12:49 crc kubenswrapper[4637]: I1201 15:12:49.425299 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f4xh2" Dec 01 15:12:49 crc kubenswrapper[4637]: I1201 15:12:49.482388 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f4xh2" Dec 01 15:12:49 crc kubenswrapper[4637]: I1201 15:12:49.786037 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a4b0da8-5193-4f28-944c-15c3d1f547d6" path="/var/lib/kubelet/pods/5a4b0da8-5193-4f28-944c-15c3d1f547d6/volumes" Dec 01 15:12:49 crc kubenswrapper[4637]: I1201 15:12:49.787177 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97be1ae1-cfd9-421b-a02a-ea2f8d1be388" path="/var/lib/kubelet/pods/97be1ae1-cfd9-421b-a02a-ea2f8d1be388/volumes" Dec 01 15:12:49 crc kubenswrapper[4637]: I1201 15:12:49.788108 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9977d860-baa6-4a79-83ac-95e324302046" path="/var/lib/kubelet/pods/9977d860-baa6-4a79-83ac-95e324302046/volumes" Dec 01 15:12:50 crc kubenswrapper[4637]: I1201 15:12:50.186339 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f4xh2"] Dec 01 15:12:50 crc kubenswrapper[4637]: I1201 15:12:50.574836 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f4xh2" podUID="cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3" containerName="registry-server" containerID="cri-o://9810c839c734dc2891324d8d2a636a5dc04348fdd7fcc6aa436dcc11a87498b9" gracePeriod=2 Dec 01 15:12:50 crc kubenswrapper[4637]: I1201 15:12:50.771707 4637 scope.go:117] "RemoveContainer" containerID="6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8" Dec 01 15:12:50 crc kubenswrapper[4637]: 
E1201 15:12:50.771987 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:12:51 crc kubenswrapper[4637]: I1201 15:12:51.597177 4637 generic.go:334] "Generic (PLEG): container finished" podID="cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3" containerID="9810c839c734dc2891324d8d2a636a5dc04348fdd7fcc6aa436dcc11a87498b9" exitCode=0 Dec 01 15:12:51 crc kubenswrapper[4637]: I1201 15:12:51.597225 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4xh2" event={"ID":"cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3","Type":"ContainerDied","Data":"9810c839c734dc2891324d8d2a636a5dc04348fdd7fcc6aa436dcc11a87498b9"} Dec 01 15:12:52 crc kubenswrapper[4637]: I1201 15:12:52.613682 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4xh2" event={"ID":"cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3","Type":"ContainerDied","Data":"4db09ca9ba47251d1cebe0c3404aabbf988193bd08bab0ba4be92dfd0bbba3ed"} Dec 01 15:12:52 crc kubenswrapper[4637]: I1201 15:12:52.614562 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4db09ca9ba47251d1cebe0c3404aabbf988193bd08bab0ba4be92dfd0bbba3ed" Dec 01 15:12:52 crc kubenswrapper[4637]: I1201 15:12:52.683412 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4xh2" Dec 01 15:12:52 crc kubenswrapper[4637]: I1201 15:12:52.750264 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3-utilities\") pod \"cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3\" (UID: \"cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3\") " Dec 01 15:12:52 crc kubenswrapper[4637]: I1201 15:12:52.750358 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3-catalog-content\") pod \"cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3\" (UID: \"cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3\") " Dec 01 15:12:52 crc kubenswrapper[4637]: I1201 15:12:52.750610 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5c87\" (UniqueName: \"kubernetes.io/projected/cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3-kube-api-access-m5c87\") pod \"cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3\" (UID: \"cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3\") " Dec 01 15:12:52 crc kubenswrapper[4637]: I1201 15:12:52.751093 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3-utilities" (OuterVolumeSpecName: "utilities") pod "cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3" (UID: "cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:12:52 crc kubenswrapper[4637]: I1201 15:12:52.751526 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 15:12:52 crc kubenswrapper[4637]: I1201 15:12:52.757346 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3-kube-api-access-m5c87" (OuterVolumeSpecName: "kube-api-access-m5c87") pod "cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3" (UID: "cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3"). InnerVolumeSpecName "kube-api-access-m5c87". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:12:52 crc kubenswrapper[4637]: I1201 15:12:52.853627 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5c87\" (UniqueName: \"kubernetes.io/projected/cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3-kube-api-access-m5c87\") on node \"crc\" DevicePath \"\"" Dec 01 15:12:52 crc kubenswrapper[4637]: I1201 15:12:52.884851 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3" (UID: "cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:12:52 crc kubenswrapper[4637]: I1201 15:12:52.956817 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 15:12:53 crc kubenswrapper[4637]: I1201 15:12:53.627217 4637 generic.go:334] "Generic (PLEG): container finished" podID="bc13b662-0282-4e7c-bb90-90c34eb84dc6" containerID="001288c02b5cccb22dc4254a61f127185a60201f55b001bd721238f29d998487" exitCode=0 Dec 01 15:12:53 crc kubenswrapper[4637]: I1201 15:12:53.627357 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4xh2" Dec 01 15:12:53 crc kubenswrapper[4637]: I1201 15:12:53.629166 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q25vd" event={"ID":"bc13b662-0282-4e7c-bb90-90c34eb84dc6","Type":"ContainerDied","Data":"001288c02b5cccb22dc4254a61f127185a60201f55b001bd721238f29d998487"} Dec 01 15:12:53 crc kubenswrapper[4637]: I1201 15:12:53.700071 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f4xh2"] Dec 01 15:12:53 crc kubenswrapper[4637]: I1201 15:12:53.709305 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f4xh2"] Dec 01 15:12:53 crc kubenswrapper[4637]: I1201 15:12:53.787178 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3" path="/var/lib/kubelet/pods/cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3/volumes" Dec 01 15:12:54 crc kubenswrapper[4637]: I1201 15:12:54.638762 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q25vd" 
event={"ID":"bc13b662-0282-4e7c-bb90-90c34eb84dc6","Type":"ContainerStarted","Data":"107b541d0604dbf1954782eebd6716cf10c9f78d78ac2c74daf2159b17adfe03"} Dec 01 15:12:54 crc kubenswrapper[4637]: I1201 15:12:54.678969 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q25vd" podStartSLOduration=2.952300209 podStartE2EDuration="10.678952161s" podCreationTimestamp="2025-12-01 15:12:44 +0000 UTC" firstStartedPulling="2025-12-01 15:12:46.542893284 +0000 UTC m=+1617.060602132" lastFinishedPulling="2025-12-01 15:12:54.269545256 +0000 UTC m=+1624.787254084" observedRunningTime="2025-12-01 15:12:54.671599134 +0000 UTC m=+1625.189307962" watchObservedRunningTime="2025-12-01 15:12:54.678952161 +0000 UTC m=+1625.196660989" Dec 01 15:12:55 crc kubenswrapper[4637]: I1201 15:12:55.155475 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q25vd" Dec 01 15:12:55 crc kubenswrapper[4637]: I1201 15:12:55.156478 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q25vd" Dec 01 15:12:56 crc kubenswrapper[4637]: I1201 15:12:56.203897 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-q25vd" podUID="bc13b662-0282-4e7c-bb90-90c34eb84dc6" containerName="registry-server" probeResult="failure" output=< Dec 01 15:12:56 crc kubenswrapper[4637]: timeout: failed to connect service ":50051" within 1s Dec 01 15:12:56 crc kubenswrapper[4637]: > Dec 01 15:12:57 crc kubenswrapper[4637]: I1201 15:12:57.593862 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vffzp"] Dec 01 15:12:57 crc kubenswrapper[4637]: E1201 15:12:57.594643 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3" containerName="extract-content" Dec 01 15:12:57 crc 
kubenswrapper[4637]: I1201 15:12:57.594658 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3" containerName="extract-content" Dec 01 15:12:57 crc kubenswrapper[4637]: E1201 15:12:57.594683 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3" containerName="extract-utilities" Dec 01 15:12:57 crc kubenswrapper[4637]: I1201 15:12:57.594690 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3" containerName="extract-utilities" Dec 01 15:12:57 crc kubenswrapper[4637]: E1201 15:12:57.594708 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3" containerName="registry-server" Dec 01 15:12:57 crc kubenswrapper[4637]: I1201 15:12:57.594714 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3" containerName="registry-server" Dec 01 15:12:57 crc kubenswrapper[4637]: I1201 15:12:57.594895 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb9ce6df-dc2e-4668-acf0-7a2f8cef2bb3" containerName="registry-server" Dec 01 15:12:57 crc kubenswrapper[4637]: I1201 15:12:57.596351 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vffzp" Dec 01 15:12:57 crc kubenswrapper[4637]: I1201 15:12:57.612205 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vffzp"] Dec 01 15:12:57 crc kubenswrapper[4637]: I1201 15:12:57.671470 4637 generic.go:334] "Generic (PLEG): container finished" podID="290aad22-6654-4895-ae47-8651471b42e6" containerID="1bc45b375f5edc0fb5c3e398a6e3be0d7d959499040a86425870ee969595c4a9" exitCode=0 Dec 01 15:12:57 crc kubenswrapper[4637]: I1201 15:12:57.671516 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r" event={"ID":"290aad22-6654-4895-ae47-8651471b42e6","Type":"ContainerDied","Data":"1bc45b375f5edc0fb5c3e398a6e3be0d7d959499040a86425870ee969595c4a9"} Dec 01 15:12:57 crc kubenswrapper[4637]: I1201 15:12:57.778883 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxl2k\" (UniqueName: \"kubernetes.io/projected/53487631-0885-4b5f-8421-8d5faeca466d-kube-api-access-dxl2k\") pod \"redhat-marketplace-vffzp\" (UID: \"53487631-0885-4b5f-8421-8d5faeca466d\") " pod="openshift-marketplace/redhat-marketplace-vffzp" Dec 01 15:12:57 crc kubenswrapper[4637]: I1201 15:12:57.779049 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53487631-0885-4b5f-8421-8d5faeca466d-catalog-content\") pod \"redhat-marketplace-vffzp\" (UID: \"53487631-0885-4b5f-8421-8d5faeca466d\") " pod="openshift-marketplace/redhat-marketplace-vffzp" Dec 01 15:12:57 crc kubenswrapper[4637]: I1201 15:12:57.779412 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53487631-0885-4b5f-8421-8d5faeca466d-utilities\") pod \"redhat-marketplace-vffzp\" (UID: 
\"53487631-0885-4b5f-8421-8d5faeca466d\") " pod="openshift-marketplace/redhat-marketplace-vffzp" Dec 01 15:12:57 crc kubenswrapper[4637]: I1201 15:12:57.881298 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53487631-0885-4b5f-8421-8d5faeca466d-utilities\") pod \"redhat-marketplace-vffzp\" (UID: \"53487631-0885-4b5f-8421-8d5faeca466d\") " pod="openshift-marketplace/redhat-marketplace-vffzp" Dec 01 15:12:57 crc kubenswrapper[4637]: I1201 15:12:57.881742 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxl2k\" (UniqueName: \"kubernetes.io/projected/53487631-0885-4b5f-8421-8d5faeca466d-kube-api-access-dxl2k\") pod \"redhat-marketplace-vffzp\" (UID: \"53487631-0885-4b5f-8421-8d5faeca466d\") " pod="openshift-marketplace/redhat-marketplace-vffzp" Dec 01 15:12:57 crc kubenswrapper[4637]: I1201 15:12:57.881925 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53487631-0885-4b5f-8421-8d5faeca466d-utilities\") pod \"redhat-marketplace-vffzp\" (UID: \"53487631-0885-4b5f-8421-8d5faeca466d\") " pod="openshift-marketplace/redhat-marketplace-vffzp" Dec 01 15:12:57 crc kubenswrapper[4637]: I1201 15:12:57.882028 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53487631-0885-4b5f-8421-8d5faeca466d-catalog-content\") pod \"redhat-marketplace-vffzp\" (UID: \"53487631-0885-4b5f-8421-8d5faeca466d\") " pod="openshift-marketplace/redhat-marketplace-vffzp" Dec 01 15:12:57 crc kubenswrapper[4637]: I1201 15:12:57.883074 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53487631-0885-4b5f-8421-8d5faeca466d-catalog-content\") pod \"redhat-marketplace-vffzp\" (UID: \"53487631-0885-4b5f-8421-8d5faeca466d\") " 
pod="openshift-marketplace/redhat-marketplace-vffzp" Dec 01 15:12:57 crc kubenswrapper[4637]: I1201 15:12:57.906469 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxl2k\" (UniqueName: \"kubernetes.io/projected/53487631-0885-4b5f-8421-8d5faeca466d-kube-api-access-dxl2k\") pod \"redhat-marketplace-vffzp\" (UID: \"53487631-0885-4b5f-8421-8d5faeca466d\") " pod="openshift-marketplace/redhat-marketplace-vffzp" Dec 01 15:12:57 crc kubenswrapper[4637]: I1201 15:12:57.934964 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vffzp" Dec 01 15:12:58 crc kubenswrapper[4637]: I1201 15:12:58.040875 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-8b91-account-create-7ngwt"] Dec 01 15:12:58 crc kubenswrapper[4637]: I1201 15:12:58.051328 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-8b91-account-create-7ngwt"] Dec 01 15:12:58 crc kubenswrapper[4637]: I1201 15:12:58.302012 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vffzp"] Dec 01 15:12:58 crc kubenswrapper[4637]: I1201 15:12:58.684427 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vffzp" event={"ID":"53487631-0885-4b5f-8421-8d5faeca466d","Type":"ContainerStarted","Data":"de275c59218d9895a923c469c229d58f14ae867c9833dca3abe81cbcfae4adeb"} Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.101978 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-569f-account-create-5sxsf"] Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.132009 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-0d53-account-create-fphvj"] Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.143986 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-569f-account-create-5sxsf"] Dec 01 
15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.148672 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-0d53-account-create-fphvj"] Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.502367 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r" Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.631982 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6kkp\" (UniqueName: \"kubernetes.io/projected/290aad22-6654-4895-ae47-8651471b42e6-kube-api-access-m6kkp\") pod \"290aad22-6654-4895-ae47-8651471b42e6\" (UID: \"290aad22-6654-4895-ae47-8651471b42e6\") " Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.632158 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/290aad22-6654-4895-ae47-8651471b42e6-inventory\") pod \"290aad22-6654-4895-ae47-8651471b42e6\" (UID: \"290aad22-6654-4895-ae47-8651471b42e6\") " Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.632232 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/290aad22-6654-4895-ae47-8651471b42e6-ssh-key\") pod \"290aad22-6654-4895-ae47-8651471b42e6\" (UID: \"290aad22-6654-4895-ae47-8651471b42e6\") " Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.632372 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/290aad22-6654-4895-ae47-8651471b42e6-bootstrap-combined-ca-bundle\") pod \"290aad22-6654-4895-ae47-8651471b42e6\" (UID: \"290aad22-6654-4895-ae47-8651471b42e6\") " Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.656279 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/290aad22-6654-4895-ae47-8651471b42e6-kube-api-access-m6kkp" (OuterVolumeSpecName: "kube-api-access-m6kkp") pod "290aad22-6654-4895-ae47-8651471b42e6" (UID: "290aad22-6654-4895-ae47-8651471b42e6"). InnerVolumeSpecName "kube-api-access-m6kkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.677683 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/290aad22-6654-4895-ae47-8651471b42e6-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "290aad22-6654-4895-ae47-8651471b42e6" (UID: "290aad22-6654-4895-ae47-8651471b42e6"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.694976 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/290aad22-6654-4895-ae47-8651471b42e6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "290aad22-6654-4895-ae47-8651471b42e6" (UID: "290aad22-6654-4895-ae47-8651471b42e6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.713332 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/290aad22-6654-4895-ae47-8651471b42e6-inventory" (OuterVolumeSpecName: "inventory") pod "290aad22-6654-4895-ae47-8651471b42e6" (UID: "290aad22-6654-4895-ae47-8651471b42e6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.725963 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r" event={"ID":"290aad22-6654-4895-ae47-8651471b42e6","Type":"ContainerDied","Data":"493b3501f879494f9e4e17a60ee323d0d54fc7bde46e709eab75dfceaae489eb"} Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.726006 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="493b3501f879494f9e4e17a60ee323d0d54fc7bde46e709eab75dfceaae489eb" Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.726103 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r" Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.732510 4637 generic.go:334] "Generic (PLEG): container finished" podID="53487631-0885-4b5f-8421-8d5faeca466d" containerID="e6fb503bd984b5d377237205d885215c41abc7f18df7c2ef566b580551ef097d" exitCode=0 Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.732754 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vffzp" event={"ID":"53487631-0885-4b5f-8421-8d5faeca466d","Type":"ContainerDied","Data":"e6fb503bd984b5d377237205d885215c41abc7f18df7c2ef566b580551ef097d"} Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.748864 4637 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/290aad22-6654-4895-ae47-8651471b42e6-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.748908 4637 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/290aad22-6654-4895-ae47-8651471b42e6-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.748927 4637 reconciler_common.go:293] 
"Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/290aad22-6654-4895-ae47-8651471b42e6-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.748956 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6kkp\" (UniqueName: \"kubernetes.io/projected/290aad22-6654-4895-ae47-8651471b42e6-kube-api-access-m6kkp\") on node \"crc\" DevicePath \"\"" Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.852475 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="510d80d9-6c2b-4128-bbb7-03c03e1b68dc" path="/var/lib/kubelet/pods/510d80d9-6c2b-4128-bbb7-03c03e1b68dc/volumes" Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.853972 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7381386b-37bd-4588-ae5e-40f50e89e0e2" path="/var/lib/kubelet/pods/7381386b-37bd-4588-ae5e-40f50e89e0e2/volumes" Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.854590 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d60b5eb9-075c-48f5-87b1-9bb3b870f589" path="/var/lib/kubelet/pods/d60b5eb9-075c-48f5-87b1-9bb3b870f589/volumes" Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.867568 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ph98p"] Dec 01 15:12:59 crc kubenswrapper[4637]: E1201 15:12:59.868422 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="290aad22-6654-4895-ae47-8651471b42e6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.868445 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="290aad22-6654-4895-ae47-8651471b42e6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.868729 4637 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="290aad22-6654-4895-ae47-8651471b42e6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.870232 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ph98p" Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.879569 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.879830 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.879896 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.879833 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lt5wx" Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.886429 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ph98p"] Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.957211 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0103da10-320d-4303-8498-e0f06d9e97f4-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ph98p\" (UID: \"0103da10-320d-4303-8498-e0f06d9e97f4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ph98p" Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.957535 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0103da10-320d-4303-8498-e0f06d9e97f4-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-ph98p\" (UID: \"0103da10-320d-4303-8498-e0f06d9e97f4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ph98p" Dec 01 15:12:59 crc kubenswrapper[4637]: I1201 15:12:59.957711 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qllb\" (UniqueName: \"kubernetes.io/projected/0103da10-320d-4303-8498-e0f06d9e97f4-kube-api-access-6qllb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ph98p\" (UID: \"0103da10-320d-4303-8498-e0f06d9e97f4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ph98p" Dec 01 15:13:00 crc kubenswrapper[4637]: I1201 15:13:00.059871 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qllb\" (UniqueName: \"kubernetes.io/projected/0103da10-320d-4303-8498-e0f06d9e97f4-kube-api-access-6qllb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ph98p\" (UID: \"0103da10-320d-4303-8498-e0f06d9e97f4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ph98p" Dec 01 15:13:00 crc kubenswrapper[4637]: I1201 15:13:00.060092 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0103da10-320d-4303-8498-e0f06d9e97f4-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ph98p\" (UID: \"0103da10-320d-4303-8498-e0f06d9e97f4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ph98p" Dec 01 15:13:00 crc kubenswrapper[4637]: I1201 15:13:00.060188 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0103da10-320d-4303-8498-e0f06d9e97f4-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ph98p\" (UID: \"0103da10-320d-4303-8498-e0f06d9e97f4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ph98p" Dec 01 15:13:00 
crc kubenswrapper[4637]: I1201 15:13:00.066308 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0103da10-320d-4303-8498-e0f06d9e97f4-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ph98p\" (UID: \"0103da10-320d-4303-8498-e0f06d9e97f4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ph98p" Dec 01 15:13:00 crc kubenswrapper[4637]: I1201 15:13:00.066392 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0103da10-320d-4303-8498-e0f06d9e97f4-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ph98p\" (UID: \"0103da10-320d-4303-8498-e0f06d9e97f4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ph98p" Dec 01 15:13:00 crc kubenswrapper[4637]: I1201 15:13:00.085793 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qllb\" (UniqueName: \"kubernetes.io/projected/0103da10-320d-4303-8498-e0f06d9e97f4-kube-api-access-6qllb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ph98p\" (UID: \"0103da10-320d-4303-8498-e0f06d9e97f4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ph98p" Dec 01 15:13:00 crc kubenswrapper[4637]: I1201 15:13:00.212668 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ph98p" Dec 01 15:13:00 crc kubenswrapper[4637]: I1201 15:13:00.809415 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ph98p"] Dec 01 15:13:00 crc kubenswrapper[4637]: W1201 15:13:00.818258 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0103da10_320d_4303_8498_e0f06d9e97f4.slice/crio-35dcc58b65c58aaef94a66c77b931f39dd90c966e989d3f64f92cf41566bb9fd WatchSource:0}: Error finding container 35dcc58b65c58aaef94a66c77b931f39dd90c966e989d3f64f92cf41566bb9fd: Status 404 returned error can't find the container with id 35dcc58b65c58aaef94a66c77b931f39dd90c966e989d3f64f92cf41566bb9fd Dec 01 15:13:01 crc kubenswrapper[4637]: I1201 15:13:01.763237 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ph98p" event={"ID":"0103da10-320d-4303-8498-e0f06d9e97f4","Type":"ContainerStarted","Data":"35dcc58b65c58aaef94a66c77b931f39dd90c966e989d3f64f92cf41566bb9fd"} Dec 01 15:13:01 crc kubenswrapper[4637]: I1201 15:13:01.765642 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vffzp" event={"ID":"53487631-0885-4b5f-8421-8d5faeca466d","Type":"ContainerStarted","Data":"f718a238254f065f74177e257f614dbb26979f1c6902553dc8594fe44abf02c1"} Dec 01 15:13:02 crc kubenswrapper[4637]: I1201 15:13:02.780470 4637 generic.go:334] "Generic (PLEG): container finished" podID="53487631-0885-4b5f-8421-8d5faeca466d" containerID="f718a238254f065f74177e257f614dbb26979f1c6902553dc8594fe44abf02c1" exitCode=0 Dec 01 15:13:02 crc kubenswrapper[4637]: I1201 15:13:02.780565 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vffzp" 
event={"ID":"53487631-0885-4b5f-8421-8d5faeca466d","Type":"ContainerDied","Data":"f718a238254f065f74177e257f614dbb26979f1c6902553dc8594fe44abf02c1"} Dec 01 15:13:03 crc kubenswrapper[4637]: I1201 15:13:03.802922 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ph98p" event={"ID":"0103da10-320d-4303-8498-e0f06d9e97f4","Type":"ContainerStarted","Data":"f7e2e02cad9357d670f37d2dc5e93b9f06d37308de6903b96764a03df81a024c"} Dec 01 15:13:03 crc kubenswrapper[4637]: I1201 15:13:03.851223 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ph98p" podStartSLOduration=3.583959004 podStartE2EDuration="4.851188737s" podCreationTimestamp="2025-12-01 15:12:59 +0000 UTC" firstStartedPulling="2025-12-01 15:13:00.821737827 +0000 UTC m=+1631.339446655" lastFinishedPulling="2025-12-01 15:13:02.08896756 +0000 UTC m=+1632.606676388" observedRunningTime="2025-12-01 15:13:03.845590007 +0000 UTC m=+1634.363298825" watchObservedRunningTime="2025-12-01 15:13:03.851188737 +0000 UTC m=+1634.368897585" Dec 01 15:13:04 crc kubenswrapper[4637]: I1201 15:13:04.825190 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vffzp" event={"ID":"53487631-0885-4b5f-8421-8d5faeca466d","Type":"ContainerStarted","Data":"40ddff063f5d365bed53dafec34b4813b22f4d88d61b59abd21833982a2aaaba"} Dec 01 15:13:05 crc kubenswrapper[4637]: I1201 15:13:05.202818 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q25vd" Dec 01 15:13:05 crc kubenswrapper[4637]: I1201 15:13:05.227285 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vffzp" podStartSLOduration=4.123011868 podStartE2EDuration="8.227266954s" podCreationTimestamp="2025-12-01 15:12:57 +0000 UTC" firstStartedPulling="2025-12-01 
15:12:59.744991279 +0000 UTC m=+1630.262700117" lastFinishedPulling="2025-12-01 15:13:03.849246375 +0000 UTC m=+1634.366955203" observedRunningTime="2025-12-01 15:13:04.850981228 +0000 UTC m=+1635.368690066" watchObservedRunningTime="2025-12-01 15:13:05.227266954 +0000 UTC m=+1635.744975782" Dec 01 15:13:05 crc kubenswrapper[4637]: I1201 15:13:05.249822 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q25vd" Dec 01 15:13:05 crc kubenswrapper[4637]: I1201 15:13:05.606487 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q25vd"] Dec 01 15:13:05 crc kubenswrapper[4637]: I1201 15:13:05.771727 4637 scope.go:117] "RemoveContainer" containerID="6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8" Dec 01 15:13:05 crc kubenswrapper[4637]: E1201 15:13:05.772329 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:13:05 crc kubenswrapper[4637]: I1201 15:13:05.787098 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7hh95"] Dec 01 15:13:05 crc kubenswrapper[4637]: I1201 15:13:05.787393 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7hh95" podUID="b4e2ec35-5514-4bff-9b36-d8d58563ca44" containerName="registry-server" containerID="cri-o://63310e63ba9f9c93f0055e817402efc44b4fd285235e9d4ad10170df7930afdb" gracePeriod=2 Dec 01 15:13:06 crc kubenswrapper[4637]: I1201 15:13:06.307335 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7hh95" Dec 01 15:13:06 crc kubenswrapper[4637]: I1201 15:13:06.360912 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4e2ec35-5514-4bff-9b36-d8d58563ca44-catalog-content\") pod \"b4e2ec35-5514-4bff-9b36-d8d58563ca44\" (UID: \"b4e2ec35-5514-4bff-9b36-d8d58563ca44\") " Dec 01 15:13:06 crc kubenswrapper[4637]: I1201 15:13:06.361101 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9nd6\" (UniqueName: \"kubernetes.io/projected/b4e2ec35-5514-4bff-9b36-d8d58563ca44-kube-api-access-c9nd6\") pod \"b4e2ec35-5514-4bff-9b36-d8d58563ca44\" (UID: \"b4e2ec35-5514-4bff-9b36-d8d58563ca44\") " Dec 01 15:13:06 crc kubenswrapper[4637]: I1201 15:13:06.361129 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4e2ec35-5514-4bff-9b36-d8d58563ca44-utilities\") pod \"b4e2ec35-5514-4bff-9b36-d8d58563ca44\" (UID: \"b4e2ec35-5514-4bff-9b36-d8d58563ca44\") " Dec 01 15:13:06 crc kubenswrapper[4637]: I1201 15:13:06.362114 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4e2ec35-5514-4bff-9b36-d8d58563ca44-utilities" (OuterVolumeSpecName: "utilities") pod "b4e2ec35-5514-4bff-9b36-d8d58563ca44" (UID: "b4e2ec35-5514-4bff-9b36-d8d58563ca44"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:13:06 crc kubenswrapper[4637]: I1201 15:13:06.374549 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4e2ec35-5514-4bff-9b36-d8d58563ca44-kube-api-access-c9nd6" (OuterVolumeSpecName: "kube-api-access-c9nd6") pod "b4e2ec35-5514-4bff-9b36-d8d58563ca44" (UID: "b4e2ec35-5514-4bff-9b36-d8d58563ca44"). InnerVolumeSpecName "kube-api-access-c9nd6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:13:06 crc kubenswrapper[4637]: I1201 15:13:06.425564 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4e2ec35-5514-4bff-9b36-d8d58563ca44-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4e2ec35-5514-4bff-9b36-d8d58563ca44" (UID: "b4e2ec35-5514-4bff-9b36-d8d58563ca44"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:13:06 crc kubenswrapper[4637]: I1201 15:13:06.462588 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9nd6\" (UniqueName: \"kubernetes.io/projected/b4e2ec35-5514-4bff-9b36-d8d58563ca44-kube-api-access-c9nd6\") on node \"crc\" DevicePath \"\"" Dec 01 15:13:06 crc kubenswrapper[4637]: I1201 15:13:06.462625 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4e2ec35-5514-4bff-9b36-d8d58563ca44-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 15:13:06 crc kubenswrapper[4637]: I1201 15:13:06.462638 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4e2ec35-5514-4bff-9b36-d8d58563ca44-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 15:13:06 crc kubenswrapper[4637]: I1201 15:13:06.876022 4637 generic.go:334] "Generic (PLEG): container finished" podID="b4e2ec35-5514-4bff-9b36-d8d58563ca44" containerID="63310e63ba9f9c93f0055e817402efc44b4fd285235e9d4ad10170df7930afdb" exitCode=0 Dec 01 15:13:06 crc kubenswrapper[4637]: I1201 15:13:06.876405 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7hh95" Dec 01 15:13:06 crc kubenswrapper[4637]: I1201 15:13:06.876827 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hh95" event={"ID":"b4e2ec35-5514-4bff-9b36-d8d58563ca44","Type":"ContainerDied","Data":"63310e63ba9f9c93f0055e817402efc44b4fd285235e9d4ad10170df7930afdb"} Dec 01 15:13:06 crc kubenswrapper[4637]: I1201 15:13:06.876862 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hh95" event={"ID":"b4e2ec35-5514-4bff-9b36-d8d58563ca44","Type":"ContainerDied","Data":"dcbfc42069a834099214fe2943b05cc6e9f2a889e53c5cc23ac2a8caad442166"} Dec 01 15:13:06 crc kubenswrapper[4637]: I1201 15:13:06.876880 4637 scope.go:117] "RemoveContainer" containerID="63310e63ba9f9c93f0055e817402efc44b4fd285235e9d4ad10170df7930afdb" Dec 01 15:13:06 crc kubenswrapper[4637]: I1201 15:13:06.922162 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7hh95"] Dec 01 15:13:06 crc kubenswrapper[4637]: I1201 15:13:06.933052 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7hh95"] Dec 01 15:13:06 crc kubenswrapper[4637]: I1201 15:13:06.961042 4637 scope.go:117] "RemoveContainer" containerID="ec80026ef7422091765ea0ff62b6b7db700559dfcecb84a255a9b5570cf261d6" Dec 01 15:13:06 crc kubenswrapper[4637]: I1201 15:13:06.990036 4637 scope.go:117] "RemoveContainer" containerID="51cbdd680ea7056b3ffd1a0db7355febd8bb5e94fbff112023136d423c0b2df4" Dec 01 15:13:07 crc kubenswrapper[4637]: I1201 15:13:07.030284 4637 scope.go:117] "RemoveContainer" containerID="63310e63ba9f9c93f0055e817402efc44b4fd285235e9d4ad10170df7930afdb" Dec 01 15:13:07 crc kubenswrapper[4637]: E1201 15:13:07.030915 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"63310e63ba9f9c93f0055e817402efc44b4fd285235e9d4ad10170df7930afdb\": container with ID starting with 63310e63ba9f9c93f0055e817402efc44b4fd285235e9d4ad10170df7930afdb not found: ID does not exist" containerID="63310e63ba9f9c93f0055e817402efc44b4fd285235e9d4ad10170df7930afdb" Dec 01 15:13:07 crc kubenswrapper[4637]: I1201 15:13:07.030992 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63310e63ba9f9c93f0055e817402efc44b4fd285235e9d4ad10170df7930afdb"} err="failed to get container status \"63310e63ba9f9c93f0055e817402efc44b4fd285235e9d4ad10170df7930afdb\": rpc error: code = NotFound desc = could not find container \"63310e63ba9f9c93f0055e817402efc44b4fd285235e9d4ad10170df7930afdb\": container with ID starting with 63310e63ba9f9c93f0055e817402efc44b4fd285235e9d4ad10170df7930afdb not found: ID does not exist" Dec 01 15:13:07 crc kubenswrapper[4637]: I1201 15:13:07.031037 4637 scope.go:117] "RemoveContainer" containerID="ec80026ef7422091765ea0ff62b6b7db700559dfcecb84a255a9b5570cf261d6" Dec 01 15:13:07 crc kubenswrapper[4637]: E1201 15:13:07.033755 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec80026ef7422091765ea0ff62b6b7db700559dfcecb84a255a9b5570cf261d6\": container with ID starting with ec80026ef7422091765ea0ff62b6b7db700559dfcecb84a255a9b5570cf261d6 not found: ID does not exist" containerID="ec80026ef7422091765ea0ff62b6b7db700559dfcecb84a255a9b5570cf261d6" Dec 01 15:13:07 crc kubenswrapper[4637]: I1201 15:13:07.033830 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec80026ef7422091765ea0ff62b6b7db700559dfcecb84a255a9b5570cf261d6"} err="failed to get container status \"ec80026ef7422091765ea0ff62b6b7db700559dfcecb84a255a9b5570cf261d6\": rpc error: code = NotFound desc = could not find container \"ec80026ef7422091765ea0ff62b6b7db700559dfcecb84a255a9b5570cf261d6\": container with ID 
starting with ec80026ef7422091765ea0ff62b6b7db700559dfcecb84a255a9b5570cf261d6 not found: ID does not exist" Dec 01 15:13:07 crc kubenswrapper[4637]: I1201 15:13:07.033866 4637 scope.go:117] "RemoveContainer" containerID="51cbdd680ea7056b3ffd1a0db7355febd8bb5e94fbff112023136d423c0b2df4" Dec 01 15:13:07 crc kubenswrapper[4637]: E1201 15:13:07.034262 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51cbdd680ea7056b3ffd1a0db7355febd8bb5e94fbff112023136d423c0b2df4\": container with ID starting with 51cbdd680ea7056b3ffd1a0db7355febd8bb5e94fbff112023136d423c0b2df4 not found: ID does not exist" containerID="51cbdd680ea7056b3ffd1a0db7355febd8bb5e94fbff112023136d423c0b2df4" Dec 01 15:13:07 crc kubenswrapper[4637]: I1201 15:13:07.034306 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51cbdd680ea7056b3ffd1a0db7355febd8bb5e94fbff112023136d423c0b2df4"} err="failed to get container status \"51cbdd680ea7056b3ffd1a0db7355febd8bb5e94fbff112023136d423c0b2df4\": rpc error: code = NotFound desc = could not find container \"51cbdd680ea7056b3ffd1a0db7355febd8bb5e94fbff112023136d423c0b2df4\": container with ID starting with 51cbdd680ea7056b3ffd1a0db7355febd8bb5e94fbff112023136d423c0b2df4 not found: ID does not exist" Dec 01 15:13:07 crc kubenswrapper[4637]: I1201 15:13:07.783414 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4e2ec35-5514-4bff-9b36-d8d58563ca44" path="/var/lib/kubelet/pods/b4e2ec35-5514-4bff-9b36-d8d58563ca44/volumes" Dec 01 15:13:07 crc kubenswrapper[4637]: I1201 15:13:07.935591 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vffzp" Dec 01 15:13:07 crc kubenswrapper[4637]: I1201 15:13:07.935662 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vffzp" Dec 01 15:13:08 crc 
kubenswrapper[4637]: I1201 15:13:08.018824 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vffzp" Dec 01 15:13:17 crc kubenswrapper[4637]: I1201 15:13:17.979854 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vffzp" Dec 01 15:13:18 crc kubenswrapper[4637]: I1201 15:13:18.040534 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vffzp"] Dec 01 15:13:18 crc kubenswrapper[4637]: I1201 15:13:18.771697 4637 scope.go:117] "RemoveContainer" containerID="6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8" Dec 01 15:13:18 crc kubenswrapper[4637]: E1201 15:13:18.772045 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:13:18 crc kubenswrapper[4637]: I1201 15:13:18.993996 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vffzp" podUID="53487631-0885-4b5f-8421-8d5faeca466d" containerName="registry-server" containerID="cri-o://40ddff063f5d365bed53dafec34b4813b22f4d88d61b59abd21833982a2aaaba" gracePeriod=2 Dec 01 15:13:19 crc kubenswrapper[4637]: I1201 15:13:19.437604 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vffzp" Dec 01 15:13:19 crc kubenswrapper[4637]: I1201 15:13:19.626584 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53487631-0885-4b5f-8421-8d5faeca466d-utilities\") pod \"53487631-0885-4b5f-8421-8d5faeca466d\" (UID: \"53487631-0885-4b5f-8421-8d5faeca466d\") " Dec 01 15:13:19 crc kubenswrapper[4637]: I1201 15:13:19.627043 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53487631-0885-4b5f-8421-8d5faeca466d-catalog-content\") pod \"53487631-0885-4b5f-8421-8d5faeca466d\" (UID: \"53487631-0885-4b5f-8421-8d5faeca466d\") " Dec 01 15:13:19 crc kubenswrapper[4637]: I1201 15:13:19.627199 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxl2k\" (UniqueName: \"kubernetes.io/projected/53487631-0885-4b5f-8421-8d5faeca466d-kube-api-access-dxl2k\") pod \"53487631-0885-4b5f-8421-8d5faeca466d\" (UID: \"53487631-0885-4b5f-8421-8d5faeca466d\") " Dec 01 15:13:19 crc kubenswrapper[4637]: I1201 15:13:19.627629 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53487631-0885-4b5f-8421-8d5faeca466d-utilities" (OuterVolumeSpecName: "utilities") pod "53487631-0885-4b5f-8421-8d5faeca466d" (UID: "53487631-0885-4b5f-8421-8d5faeca466d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:13:19 crc kubenswrapper[4637]: I1201 15:13:19.628504 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53487631-0885-4b5f-8421-8d5faeca466d-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 15:13:19 crc kubenswrapper[4637]: I1201 15:13:19.634814 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53487631-0885-4b5f-8421-8d5faeca466d-kube-api-access-dxl2k" (OuterVolumeSpecName: "kube-api-access-dxl2k") pod "53487631-0885-4b5f-8421-8d5faeca466d" (UID: "53487631-0885-4b5f-8421-8d5faeca466d"). InnerVolumeSpecName "kube-api-access-dxl2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:13:19 crc kubenswrapper[4637]: I1201 15:13:19.645858 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53487631-0885-4b5f-8421-8d5faeca466d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53487631-0885-4b5f-8421-8d5faeca466d" (UID: "53487631-0885-4b5f-8421-8d5faeca466d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:13:19 crc kubenswrapper[4637]: I1201 15:13:19.729448 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53487631-0885-4b5f-8421-8d5faeca466d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 15:13:19 crc kubenswrapper[4637]: I1201 15:13:19.729484 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxl2k\" (UniqueName: \"kubernetes.io/projected/53487631-0885-4b5f-8421-8d5faeca466d-kube-api-access-dxl2k\") on node \"crc\" DevicePath \"\"" Dec 01 15:13:20 crc kubenswrapper[4637]: I1201 15:13:20.004496 4637 generic.go:334] "Generic (PLEG): container finished" podID="53487631-0885-4b5f-8421-8d5faeca466d" containerID="40ddff063f5d365bed53dafec34b4813b22f4d88d61b59abd21833982a2aaaba" exitCode=0 Dec 01 15:13:20 crc kubenswrapper[4637]: I1201 15:13:20.004537 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vffzp" event={"ID":"53487631-0885-4b5f-8421-8d5faeca466d","Type":"ContainerDied","Data":"40ddff063f5d365bed53dafec34b4813b22f4d88d61b59abd21833982a2aaaba"} Dec 01 15:13:20 crc kubenswrapper[4637]: I1201 15:13:20.004577 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vffzp" Dec 01 15:13:20 crc kubenswrapper[4637]: I1201 15:13:20.004591 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vffzp" event={"ID":"53487631-0885-4b5f-8421-8d5faeca466d","Type":"ContainerDied","Data":"de275c59218d9895a923c469c229d58f14ae867c9833dca3abe81cbcfae4adeb"} Dec 01 15:13:20 crc kubenswrapper[4637]: I1201 15:13:20.004612 4637 scope.go:117] "RemoveContainer" containerID="40ddff063f5d365bed53dafec34b4813b22f4d88d61b59abd21833982a2aaaba" Dec 01 15:13:20 crc kubenswrapper[4637]: I1201 15:13:20.029798 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vffzp"] Dec 01 15:13:20 crc kubenswrapper[4637]: I1201 15:13:20.041233 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vffzp"] Dec 01 15:13:20 crc kubenswrapper[4637]: I1201 15:13:20.046467 4637 scope.go:117] "RemoveContainer" containerID="f718a238254f065f74177e257f614dbb26979f1c6902553dc8594fe44abf02c1" Dec 01 15:13:20 crc kubenswrapper[4637]: I1201 15:13:20.074074 4637 scope.go:117] "RemoveContainer" containerID="e6fb503bd984b5d377237205d885215c41abc7f18df7c2ef566b580551ef097d" Dec 01 15:13:20 crc kubenswrapper[4637]: I1201 15:13:20.120416 4637 scope.go:117] "RemoveContainer" containerID="40ddff063f5d365bed53dafec34b4813b22f4d88d61b59abd21833982a2aaaba" Dec 01 15:13:20 crc kubenswrapper[4637]: E1201 15:13:20.120839 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40ddff063f5d365bed53dafec34b4813b22f4d88d61b59abd21833982a2aaaba\": container with ID starting with 40ddff063f5d365bed53dafec34b4813b22f4d88d61b59abd21833982a2aaaba not found: ID does not exist" containerID="40ddff063f5d365bed53dafec34b4813b22f4d88d61b59abd21833982a2aaaba" Dec 01 15:13:20 crc kubenswrapper[4637]: I1201 15:13:20.120885 4637 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40ddff063f5d365bed53dafec34b4813b22f4d88d61b59abd21833982a2aaaba"} err="failed to get container status \"40ddff063f5d365bed53dafec34b4813b22f4d88d61b59abd21833982a2aaaba\": rpc error: code = NotFound desc = could not find container \"40ddff063f5d365bed53dafec34b4813b22f4d88d61b59abd21833982a2aaaba\": container with ID starting with 40ddff063f5d365bed53dafec34b4813b22f4d88d61b59abd21833982a2aaaba not found: ID does not exist" Dec 01 15:13:20 crc kubenswrapper[4637]: I1201 15:13:20.120912 4637 scope.go:117] "RemoveContainer" containerID="f718a238254f065f74177e257f614dbb26979f1c6902553dc8594fe44abf02c1" Dec 01 15:13:20 crc kubenswrapper[4637]: E1201 15:13:20.121440 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f718a238254f065f74177e257f614dbb26979f1c6902553dc8594fe44abf02c1\": container with ID starting with f718a238254f065f74177e257f614dbb26979f1c6902553dc8594fe44abf02c1 not found: ID does not exist" containerID="f718a238254f065f74177e257f614dbb26979f1c6902553dc8594fe44abf02c1" Dec 01 15:13:20 crc kubenswrapper[4637]: I1201 15:13:20.121463 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f718a238254f065f74177e257f614dbb26979f1c6902553dc8594fe44abf02c1"} err="failed to get container status \"f718a238254f065f74177e257f614dbb26979f1c6902553dc8594fe44abf02c1\": rpc error: code = NotFound desc = could not find container \"f718a238254f065f74177e257f614dbb26979f1c6902553dc8594fe44abf02c1\": container with ID starting with f718a238254f065f74177e257f614dbb26979f1c6902553dc8594fe44abf02c1 not found: ID does not exist" Dec 01 15:13:20 crc kubenswrapper[4637]: I1201 15:13:20.121482 4637 scope.go:117] "RemoveContainer" containerID="e6fb503bd984b5d377237205d885215c41abc7f18df7c2ef566b580551ef097d" Dec 01 15:13:20 crc kubenswrapper[4637]: E1201 
15:13:20.121829 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6fb503bd984b5d377237205d885215c41abc7f18df7c2ef566b580551ef097d\": container with ID starting with e6fb503bd984b5d377237205d885215c41abc7f18df7c2ef566b580551ef097d not found: ID does not exist" containerID="e6fb503bd984b5d377237205d885215c41abc7f18df7c2ef566b580551ef097d" Dec 01 15:13:20 crc kubenswrapper[4637]: I1201 15:13:20.121864 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6fb503bd984b5d377237205d885215c41abc7f18df7c2ef566b580551ef097d"} err="failed to get container status \"e6fb503bd984b5d377237205d885215c41abc7f18df7c2ef566b580551ef097d\": rpc error: code = NotFound desc = could not find container \"e6fb503bd984b5d377237205d885215c41abc7f18df7c2ef566b580551ef097d\": container with ID starting with e6fb503bd984b5d377237205d885215c41abc7f18df7c2ef566b580551ef097d not found: ID does not exist" Dec 01 15:13:21 crc kubenswrapper[4637]: I1201 15:13:21.782036 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53487631-0885-4b5f-8421-8d5faeca466d" path="/var/lib/kubelet/pods/53487631-0885-4b5f-8421-8d5faeca466d/volumes" Dec 01 15:13:23 crc kubenswrapper[4637]: I1201 15:13:23.037197 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-6pfn9"] Dec 01 15:13:23 crc kubenswrapper[4637]: I1201 15:13:23.050521 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-m6xt2"] Dec 01 15:13:23 crc kubenswrapper[4637]: I1201 15:13:23.067871 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-dbjxn"] Dec 01 15:13:23 crc kubenswrapper[4637]: I1201 15:13:23.071630 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-6pfn9"] Dec 01 15:13:23 crc kubenswrapper[4637]: I1201 15:13:23.080260 4637 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/neutron-db-create-m6xt2"] Dec 01 15:13:23 crc kubenswrapper[4637]: I1201 15:13:23.095626 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-dbjxn"] Dec 01 15:13:23 crc kubenswrapper[4637]: I1201 15:13:23.783585 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f0c2216-424c-4d29-948c-90418de8b7aa" path="/var/lib/kubelet/pods/0f0c2216-424c-4d29-948c-90418de8b7aa/volumes" Dec 01 15:13:23 crc kubenswrapper[4637]: I1201 15:13:23.784687 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9fcf440-e6a8-4aea-af11-c65be59ddd4b" path="/var/lib/kubelet/pods/c9fcf440-e6a8-4aea-af11-c65be59ddd4b/volumes" Dec 01 15:13:23 crc kubenswrapper[4637]: I1201 15:13:23.785492 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9ec1661-7b3b-46b7-844c-e0278d64bc38" path="/var/lib/kubelet/pods/d9ec1661-7b3b-46b7-844c-e0278d64bc38/volumes" Dec 01 15:13:29 crc kubenswrapper[4637]: I1201 15:13:29.782534 4637 scope.go:117] "RemoveContainer" containerID="6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8" Dec 01 15:13:29 crc kubenswrapper[4637]: E1201 15:13:29.784415 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:13:31 crc kubenswrapper[4637]: I1201 15:13:31.040762 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-4t992"] Dec 01 15:13:31 crc kubenswrapper[4637]: I1201 15:13:31.049536 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-4t992"] Dec 01 15:13:31 crc 
kubenswrapper[4637]: I1201 15:13:31.782594 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cc717b2-b55e-4131-bd83-e041d4811607" path="/var/lib/kubelet/pods/1cc717b2-b55e-4131-bd83-e041d4811607/volumes" Dec 01 15:13:32 crc kubenswrapper[4637]: I1201 15:13:32.029143 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-bhz5d"] Dec 01 15:13:32 crc kubenswrapper[4637]: I1201 15:13:32.036955 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-bhz5d"] Dec 01 15:13:32 crc kubenswrapper[4637]: I1201 15:13:32.310785 4637 scope.go:117] "RemoveContainer" containerID="a8a7321851b2fcfb7d364af2d3a9b1f1fa91edf57dfa7a6259656306ddc25c15" Dec 01 15:13:32 crc kubenswrapper[4637]: I1201 15:13:32.347245 4637 scope.go:117] "RemoveContainer" containerID="a72484f80d650f7dd16d7a74a8de8628b186a014324d49699570cb3d6a91e19a" Dec 01 15:13:32 crc kubenswrapper[4637]: I1201 15:13:32.385166 4637 scope.go:117] "RemoveContainer" containerID="0ffa74d75fd30ab018d0b5d259c0c28bc2407c91fa05830953e4c28f9454e882" Dec 01 15:13:32 crc kubenswrapper[4637]: I1201 15:13:32.430653 4637 scope.go:117] "RemoveContainer" containerID="eedb9454cd7e20853be9614e10c0327b33f1539bea136c61241c41dd49349346" Dec 01 15:13:32 crc kubenswrapper[4637]: I1201 15:13:32.472265 4637 scope.go:117] "RemoveContainer" containerID="5838f6c0f4f9dd58ecad23da69b5b3369fd34bd961a1d5ddf33324acfe19ba5f" Dec 01 15:13:32 crc kubenswrapper[4637]: I1201 15:13:32.527585 4637 scope.go:117] "RemoveContainer" containerID="2aed1866c2396d79f75ebc786b2693962e14643e0e5c5847a1182b8fde5dee21" Dec 01 15:13:32 crc kubenswrapper[4637]: I1201 15:13:32.568307 4637 scope.go:117] "RemoveContainer" containerID="86ded1e2eb9eb3f45f32fe2db9695154b099b18a8c9e6697aa614a7d9aaf5f21" Dec 01 15:13:32 crc kubenswrapper[4637]: I1201 15:13:32.586305 4637 scope.go:117] "RemoveContainer" containerID="da463577d8b340b6b7256bb9068525a0c3a4b1008bfaf946ff5934c9dff416b6" Dec 01 15:13:32 crc 
kubenswrapper[4637]: I1201 15:13:32.603808 4637 scope.go:117] "RemoveContainer" containerID="aead16d51cd14f842b809d9d33b95a803a1fbf2fbf9841dfdde1c27d98e1a7f0" Dec 01 15:13:32 crc kubenswrapper[4637]: I1201 15:13:32.619445 4637 scope.go:117] "RemoveContainer" containerID="2241e385dd41df47f36238d2c5f7b9c7afecd216eb0f627979fbbcd052ba5fc4" Dec 01 15:13:33 crc kubenswrapper[4637]: I1201 15:13:33.821312 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="909f05de-3353-479b-b281-41bdb6d455fd" path="/var/lib/kubelet/pods/909f05de-3353-479b-b281-41bdb6d455fd/volumes" Dec 01 15:13:38 crc kubenswrapper[4637]: I1201 15:13:38.040788 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-f547-account-create-brb84"] Dec 01 15:13:38 crc kubenswrapper[4637]: I1201 15:13:38.055597 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-f547-account-create-brb84"] Dec 01 15:13:39 crc kubenswrapper[4637]: I1201 15:13:39.031562 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-a37f-account-create-cjk64"] Dec 01 15:13:39 crc kubenswrapper[4637]: I1201 15:13:39.043839 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cd96-account-create-kx994"] Dec 01 15:13:39 crc kubenswrapper[4637]: I1201 15:13:39.051595 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-a37f-account-create-cjk64"] Dec 01 15:13:39 crc kubenswrapper[4637]: I1201 15:13:39.059326 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-cd96-account-create-kx994"] Dec 01 15:13:39 crc kubenswrapper[4637]: I1201 15:13:39.782826 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61c23c76-9613-401c-aff5-5ad572188e85" path="/var/lib/kubelet/pods/61c23c76-9613-401c-aff5-5ad572188e85/volumes" Dec 01 15:13:39 crc kubenswrapper[4637]: I1201 15:13:39.783946 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="9c657096-ed3a-4b49-a646-4ebad0261998" path="/var/lib/kubelet/pods/9c657096-ed3a-4b49-a646-4ebad0261998/volumes" Dec 01 15:13:39 crc kubenswrapper[4637]: I1201 15:13:39.784452 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2866484-72b3-4826-b45d-f015df568ee1" path="/var/lib/kubelet/pods/a2866484-72b3-4826-b45d-f015df568ee1/volumes" Dec 01 15:13:43 crc kubenswrapper[4637]: I1201 15:13:43.771827 4637 scope.go:117] "RemoveContainer" containerID="6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8" Dec 01 15:13:43 crc kubenswrapper[4637]: E1201 15:13:43.772456 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:13:55 crc kubenswrapper[4637]: I1201 15:13:55.771643 4637 scope.go:117] "RemoveContainer" containerID="6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8" Dec 01 15:13:55 crc kubenswrapper[4637]: E1201 15:13:55.772403 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:14:01 crc kubenswrapper[4637]: I1201 15:14:01.047056 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-sb6hr"] Dec 01 15:14:01 crc kubenswrapper[4637]: I1201 15:14:01.055184 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/placement-db-sync-sb6hr"] Dec 01 15:14:01 crc kubenswrapper[4637]: I1201 15:14:01.782211 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecec3227-52bd-4b05-83ac-90218117a222" path="/var/lib/kubelet/pods/ecec3227-52bd-4b05-83ac-90218117a222/volumes" Dec 01 15:14:08 crc kubenswrapper[4637]: I1201 15:14:08.771462 4637 scope.go:117] "RemoveContainer" containerID="6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8" Dec 01 15:14:08 crc kubenswrapper[4637]: E1201 15:14:08.772082 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:14:10 crc kubenswrapper[4637]: I1201 15:14:10.036376 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-5lxgt"] Dec 01 15:14:10 crc kubenswrapper[4637]: I1201 15:14:10.057500 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-5lxgt"] Dec 01 15:14:11 crc kubenswrapper[4637]: I1201 15:14:11.782424 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb48315c-5146-4f1e-9d0f-e39186e54083" path="/var/lib/kubelet/pods/cb48315c-5146-4f1e-9d0f-e39186e54083/volumes" Dec 01 15:14:21 crc kubenswrapper[4637]: I1201 15:14:21.771905 4637 scope.go:117] "RemoveContainer" containerID="6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8" Dec 01 15:14:21 crc kubenswrapper[4637]: E1201 15:14:21.772846 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:14:29 crc kubenswrapper[4637]: I1201 15:14:29.043970 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-x4gwb"] Dec 01 15:14:29 crc kubenswrapper[4637]: I1201 15:14:29.055003 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-x4gwb"] Dec 01 15:14:29 crc kubenswrapper[4637]: I1201 15:14:29.793329 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31a1344d-109c-400f-ac50-60be5fed1255" path="/var/lib/kubelet/pods/31a1344d-109c-400f-ac50-60be5fed1255/volumes" Dec 01 15:14:32 crc kubenswrapper[4637]: I1201 15:14:32.772307 4637 scope.go:117] "RemoveContainer" containerID="6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8" Dec 01 15:14:32 crc kubenswrapper[4637]: E1201 15:14:32.773283 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:14:32 crc kubenswrapper[4637]: I1201 15:14:32.828558 4637 scope.go:117] "RemoveContainer" containerID="d209f4062a14a78cd304c59f8d4191465a3bdbaef6e2e661fe6a33d7c9226ea8" Dec 01 15:14:32 crc kubenswrapper[4637]: I1201 15:14:32.889196 4637 scope.go:117] "RemoveContainer" containerID="df23eeac91b96a30fe792ded19bd9c38db8f91aa6239bd353c0af0344d902ee9" Dec 01 15:14:32 crc kubenswrapper[4637]: I1201 15:14:32.916762 4637 scope.go:117] "RemoveContainer" containerID="30b7bb7e9cf3a9282f5b75e2064539cc3e396330a0d917c63335e38bf91663f9" Dec 
01 15:14:32 crc kubenswrapper[4637]: I1201 15:14:32.968433 4637 scope.go:117] "RemoveContainer" containerID="23ccf08a746215d892e4fd11542cc3e54b19e6516642cd4c1d601543bf54b8c0" Dec 01 15:14:33 crc kubenswrapper[4637]: I1201 15:14:33.026689 4637 scope.go:117] "RemoveContainer" containerID="ad46b8aa95ab5b0658152c889c4b65dfd93159108db3526530e4365c87f4e433" Dec 01 15:14:33 crc kubenswrapper[4637]: I1201 15:14:33.063363 4637 scope.go:117] "RemoveContainer" containerID="f3617393c1c8b917e13c2902e4f91b5d908beb6a1b5494113448ae691ce3aa8f" Dec 01 15:14:33 crc kubenswrapper[4637]: I1201 15:14:33.115893 4637 scope.go:117] "RemoveContainer" containerID="b5b22f55974133b109af48d3e32c860a4cf1a0e9836c6dfbb8acc43cc7b8a4d3" Dec 01 15:14:39 crc kubenswrapper[4637]: I1201 15:14:39.028812 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-zc7pv"] Dec 01 15:14:39 crc kubenswrapper[4637]: I1201 15:14:39.038219 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-zc7pv"] Dec 01 15:14:39 crc kubenswrapper[4637]: I1201 15:14:39.787999 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7137da0-10ce-4ac0-8e2e-658247d8c0b7" path="/var/lib/kubelet/pods/a7137da0-10ce-4ac0-8e2e-658247d8c0b7/volumes" Dec 01 15:14:43 crc kubenswrapper[4637]: I1201 15:14:43.771201 4637 scope.go:117] "RemoveContainer" containerID="6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8" Dec 01 15:14:43 crc kubenswrapper[4637]: E1201 15:14:43.771766 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:14:47 crc kubenswrapper[4637]: I1201 
15:14:47.806518 4637 generic.go:334] "Generic (PLEG): container finished" podID="0103da10-320d-4303-8498-e0f06d9e97f4" containerID="f7e2e02cad9357d670f37d2dc5e93b9f06d37308de6903b96764a03df81a024c" exitCode=0 Dec 01 15:14:47 crc kubenswrapper[4637]: I1201 15:14:47.806614 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ph98p" event={"ID":"0103da10-320d-4303-8498-e0f06d9e97f4","Type":"ContainerDied","Data":"f7e2e02cad9357d670f37d2dc5e93b9f06d37308de6903b96764a03df81a024c"} Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.225004 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ph98p" Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.395023 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0103da10-320d-4303-8498-e0f06d9e97f4-ssh-key\") pod \"0103da10-320d-4303-8498-e0f06d9e97f4\" (UID: \"0103da10-320d-4303-8498-e0f06d9e97f4\") " Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.395092 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qllb\" (UniqueName: \"kubernetes.io/projected/0103da10-320d-4303-8498-e0f06d9e97f4-kube-api-access-6qllb\") pod \"0103da10-320d-4303-8498-e0f06d9e97f4\" (UID: \"0103da10-320d-4303-8498-e0f06d9e97f4\") " Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.395262 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0103da10-320d-4303-8498-e0f06d9e97f4-inventory\") pod \"0103da10-320d-4303-8498-e0f06d9e97f4\" (UID: \"0103da10-320d-4303-8498-e0f06d9e97f4\") " Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.406201 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/0103da10-320d-4303-8498-e0f06d9e97f4-kube-api-access-6qllb" (OuterVolumeSpecName: "kube-api-access-6qllb") pod "0103da10-320d-4303-8498-e0f06d9e97f4" (UID: "0103da10-320d-4303-8498-e0f06d9e97f4"). InnerVolumeSpecName "kube-api-access-6qllb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.427309 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0103da10-320d-4303-8498-e0f06d9e97f4-inventory" (OuterVolumeSpecName: "inventory") pod "0103da10-320d-4303-8498-e0f06d9e97f4" (UID: "0103da10-320d-4303-8498-e0f06d9e97f4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.428661 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0103da10-320d-4303-8498-e0f06d9e97f4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0103da10-320d-4303-8498-e0f06d9e97f4" (UID: "0103da10-320d-4303-8498-e0f06d9e97f4"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.497590 4637 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0103da10-320d-4303-8498-e0f06d9e97f4-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.497624 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qllb\" (UniqueName: \"kubernetes.io/projected/0103da10-320d-4303-8498-e0f06d9e97f4-kube-api-access-6qllb\") on node \"crc\" DevicePath \"\"" Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.497634 4637 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0103da10-320d-4303-8498-e0f06d9e97f4-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.824259 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ph98p" event={"ID":"0103da10-320d-4303-8498-e0f06d9e97f4","Type":"ContainerDied","Data":"35dcc58b65c58aaef94a66c77b931f39dd90c966e989d3f64f92cf41566bb9fd"} Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.824309 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35dcc58b65c58aaef94a66c77b931f39dd90c966e989d3f64f92cf41566bb9fd" Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.824314 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ph98p" Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.943431 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-khhg5"] Dec 01 15:14:49 crc kubenswrapper[4637]: E1201 15:14:49.944010 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4e2ec35-5514-4bff-9b36-d8d58563ca44" containerName="extract-utilities" Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.944032 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4e2ec35-5514-4bff-9b36-d8d58563ca44" containerName="extract-utilities" Dec 01 15:14:49 crc kubenswrapper[4637]: E1201 15:14:49.944047 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4e2ec35-5514-4bff-9b36-d8d58563ca44" containerName="registry-server" Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.944055 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4e2ec35-5514-4bff-9b36-d8d58563ca44" containerName="registry-server" Dec 01 15:14:49 crc kubenswrapper[4637]: E1201 15:14:49.944089 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4e2ec35-5514-4bff-9b36-d8d58563ca44" containerName="extract-content" Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.944097 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4e2ec35-5514-4bff-9b36-d8d58563ca44" containerName="extract-content" Dec 01 15:14:49 crc kubenswrapper[4637]: E1201 15:14:49.944121 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53487631-0885-4b5f-8421-8d5faeca466d" containerName="registry-server" Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.944128 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="53487631-0885-4b5f-8421-8d5faeca466d" containerName="registry-server" Dec 01 15:14:49 crc kubenswrapper[4637]: E1201 15:14:49.944157 4637 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="0103da10-320d-4303-8498-e0f06d9e97f4" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.944166 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="0103da10-320d-4303-8498-e0f06d9e97f4" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 01 15:14:49 crc kubenswrapper[4637]: E1201 15:14:49.944183 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53487631-0885-4b5f-8421-8d5faeca466d" containerName="extract-utilities" Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.944190 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="53487631-0885-4b5f-8421-8d5faeca466d" containerName="extract-utilities" Dec 01 15:14:49 crc kubenswrapper[4637]: E1201 15:14:49.944209 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53487631-0885-4b5f-8421-8d5faeca466d" containerName="extract-content" Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.944217 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="53487631-0885-4b5f-8421-8d5faeca466d" containerName="extract-content" Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.944480 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4e2ec35-5514-4bff-9b36-d8d58563ca44" containerName="registry-server" Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.944515 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="0103da10-320d-4303-8498-e0f06d9e97f4" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.944527 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="53487631-0885-4b5f-8421-8d5faeca466d" containerName="registry-server" Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.945784 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-khhg5" Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.948447 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.949159 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lt5wx" Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.949526 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.949662 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 15:14:49 crc kubenswrapper[4637]: I1201 15:14:49.969110 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-khhg5"] Dec 01 15:14:50 crc kubenswrapper[4637]: I1201 15:14:50.124237 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8152b193-b04b-4380-9596-60c61cc82ef7-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-khhg5\" (UID: \"8152b193-b04b-4380-9596-60c61cc82ef7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-khhg5" Dec 01 15:14:50 crc kubenswrapper[4637]: I1201 15:14:50.124665 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8152b193-b04b-4380-9596-60c61cc82ef7-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-khhg5\" (UID: \"8152b193-b04b-4380-9596-60c61cc82ef7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-khhg5" Dec 01 15:14:50 crc kubenswrapper[4637]: I1201 15:14:50.124996 4637 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb8bm\" (UniqueName: \"kubernetes.io/projected/8152b193-b04b-4380-9596-60c61cc82ef7-kube-api-access-xb8bm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-khhg5\" (UID: \"8152b193-b04b-4380-9596-60c61cc82ef7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-khhg5" Dec 01 15:14:50 crc kubenswrapper[4637]: I1201 15:14:50.228321 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8152b193-b04b-4380-9596-60c61cc82ef7-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-khhg5\" (UID: \"8152b193-b04b-4380-9596-60c61cc82ef7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-khhg5" Dec 01 15:14:50 crc kubenswrapper[4637]: I1201 15:14:50.229061 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb8bm\" (UniqueName: \"kubernetes.io/projected/8152b193-b04b-4380-9596-60c61cc82ef7-kube-api-access-xb8bm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-khhg5\" (UID: \"8152b193-b04b-4380-9596-60c61cc82ef7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-khhg5" Dec 01 15:14:50 crc kubenswrapper[4637]: I1201 15:14:50.229538 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8152b193-b04b-4380-9596-60c61cc82ef7-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-khhg5\" (UID: \"8152b193-b04b-4380-9596-60c61cc82ef7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-khhg5" Dec 01 15:14:50 crc kubenswrapper[4637]: I1201 15:14:50.235750 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8152b193-b04b-4380-9596-60c61cc82ef7-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-khhg5\" (UID: \"8152b193-b04b-4380-9596-60c61cc82ef7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-khhg5" Dec 01 15:14:50 crc kubenswrapper[4637]: I1201 15:14:50.235845 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8152b193-b04b-4380-9596-60c61cc82ef7-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-khhg5\" (UID: \"8152b193-b04b-4380-9596-60c61cc82ef7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-khhg5" Dec 01 15:14:50 crc kubenswrapper[4637]: I1201 15:14:50.251696 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb8bm\" (UniqueName: \"kubernetes.io/projected/8152b193-b04b-4380-9596-60c61cc82ef7-kube-api-access-xb8bm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-khhg5\" (UID: \"8152b193-b04b-4380-9596-60c61cc82ef7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-khhg5" Dec 01 15:14:50 crc kubenswrapper[4637]: I1201 15:14:50.281679 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-khhg5" Dec 01 15:14:50 crc kubenswrapper[4637]: I1201 15:14:50.867766 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-khhg5"] Dec 01 15:14:51 crc kubenswrapper[4637]: I1201 15:14:51.851648 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-khhg5" event={"ID":"8152b193-b04b-4380-9596-60c61cc82ef7","Type":"ContainerStarted","Data":"749ed7475eee0ca8211c4729d6836188e69a68c011a4459c8c26a5bd40c3ea6d"} Dec 01 15:14:51 crc kubenswrapper[4637]: I1201 15:14:51.852121 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-khhg5" event={"ID":"8152b193-b04b-4380-9596-60c61cc82ef7","Type":"ContainerStarted","Data":"df8b2cda4f2200649bcfc199551a562540eb29e86a952f8f62246de38443e8da"} Dec 01 15:14:51 crc kubenswrapper[4637]: I1201 15:14:51.880604 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-khhg5" podStartSLOduration=2.25024663 podStartE2EDuration="2.880582379s" podCreationTimestamp="2025-12-01 15:14:49 +0000 UTC" firstStartedPulling="2025-12-01 15:14:50.888276879 +0000 UTC m=+1741.405985707" lastFinishedPulling="2025-12-01 15:14:51.518612638 +0000 UTC m=+1742.036321456" observedRunningTime="2025-12-01 15:14:51.868978348 +0000 UTC m=+1742.386687176" watchObservedRunningTime="2025-12-01 15:14:51.880582379 +0000 UTC m=+1742.398291207" Dec 01 15:14:57 crc kubenswrapper[4637]: I1201 15:14:57.771730 4637 scope.go:117] "RemoveContainer" containerID="6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8" Dec 01 15:14:57 crc kubenswrapper[4637]: E1201 15:14:57.773636 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:15:00 crc kubenswrapper[4637]: I1201 15:15:00.153242 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410035-7zj7l"] Dec 01 15:15:00 crc kubenswrapper[4637]: I1201 15:15:00.155978 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29410035-7zj7l" Dec 01 15:15:00 crc kubenswrapper[4637]: I1201 15:15:00.158791 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 15:15:00 crc kubenswrapper[4637]: I1201 15:15:00.169379 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 15:15:00 crc kubenswrapper[4637]: I1201 15:15:00.181197 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410035-7zj7l"] Dec 01 15:15:00 crc kubenswrapper[4637]: I1201 15:15:00.257969 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg79h\" (UniqueName: \"kubernetes.io/projected/fdc94dbe-bca6-479b-abce-c8cea50461dc-kube-api-access-dg79h\") pod \"collect-profiles-29410035-7zj7l\" (UID: \"fdc94dbe-bca6-479b-abce-c8cea50461dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410035-7zj7l" Dec 01 15:15:00 crc kubenswrapper[4637]: I1201 15:15:00.258191 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/fdc94dbe-bca6-479b-abce-c8cea50461dc-config-volume\") pod \"collect-profiles-29410035-7zj7l\" (UID: \"fdc94dbe-bca6-479b-abce-c8cea50461dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410035-7zj7l" Dec 01 15:15:00 crc kubenswrapper[4637]: I1201 15:15:00.258224 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fdc94dbe-bca6-479b-abce-c8cea50461dc-secret-volume\") pod \"collect-profiles-29410035-7zj7l\" (UID: \"fdc94dbe-bca6-479b-abce-c8cea50461dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410035-7zj7l" Dec 01 15:15:00 crc kubenswrapper[4637]: I1201 15:15:00.361452 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdc94dbe-bca6-479b-abce-c8cea50461dc-config-volume\") pod \"collect-profiles-29410035-7zj7l\" (UID: \"fdc94dbe-bca6-479b-abce-c8cea50461dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410035-7zj7l" Dec 01 15:15:00 crc kubenswrapper[4637]: I1201 15:15:00.361624 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fdc94dbe-bca6-479b-abce-c8cea50461dc-secret-volume\") pod \"collect-profiles-29410035-7zj7l\" (UID: \"fdc94dbe-bca6-479b-abce-c8cea50461dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410035-7zj7l" Dec 01 15:15:00 crc kubenswrapper[4637]: I1201 15:15:00.361800 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg79h\" (UniqueName: \"kubernetes.io/projected/fdc94dbe-bca6-479b-abce-c8cea50461dc-kube-api-access-dg79h\") pod \"collect-profiles-29410035-7zj7l\" (UID: \"fdc94dbe-bca6-479b-abce-c8cea50461dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410035-7zj7l" Dec 01 15:15:00 crc kubenswrapper[4637]: 
I1201 15:15:00.365571 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdc94dbe-bca6-479b-abce-c8cea50461dc-config-volume\") pod \"collect-profiles-29410035-7zj7l\" (UID: \"fdc94dbe-bca6-479b-abce-c8cea50461dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410035-7zj7l" Dec 01 15:15:00 crc kubenswrapper[4637]: I1201 15:15:00.387976 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg79h\" (UniqueName: \"kubernetes.io/projected/fdc94dbe-bca6-479b-abce-c8cea50461dc-kube-api-access-dg79h\") pod \"collect-profiles-29410035-7zj7l\" (UID: \"fdc94dbe-bca6-479b-abce-c8cea50461dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410035-7zj7l" Dec 01 15:15:00 crc kubenswrapper[4637]: I1201 15:15:00.389827 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fdc94dbe-bca6-479b-abce-c8cea50461dc-secret-volume\") pod \"collect-profiles-29410035-7zj7l\" (UID: \"fdc94dbe-bca6-479b-abce-c8cea50461dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410035-7zj7l" Dec 01 15:15:00 crc kubenswrapper[4637]: I1201 15:15:00.486860 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29410035-7zj7l" Dec 01 15:15:00 crc kubenswrapper[4637]: I1201 15:15:00.937118 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410035-7zj7l"] Dec 01 15:15:00 crc kubenswrapper[4637]: I1201 15:15:00.959667 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29410035-7zj7l" event={"ID":"fdc94dbe-bca6-479b-abce-c8cea50461dc","Type":"ContainerStarted","Data":"590a52510c03ee13cb75287c64b22cc1f87ce6cc6641b085715e78e2ad7f47b4"} Dec 01 15:15:01 crc kubenswrapper[4637]: I1201 15:15:01.985831 4637 generic.go:334] "Generic (PLEG): container finished" podID="fdc94dbe-bca6-479b-abce-c8cea50461dc" containerID="2db3b0c1fa83b2dec172de69a5f7cfcc064370df53edb4031375297546939548" exitCode=0 Dec 01 15:15:01 crc kubenswrapper[4637]: I1201 15:15:01.986184 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29410035-7zj7l" event={"ID":"fdc94dbe-bca6-479b-abce-c8cea50461dc","Type":"ContainerDied","Data":"2db3b0c1fa83b2dec172de69a5f7cfcc064370df53edb4031375297546939548"} Dec 01 15:15:03 crc kubenswrapper[4637]: I1201 15:15:03.330946 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29410035-7zj7l" Dec 01 15:15:03 crc kubenswrapper[4637]: I1201 15:15:03.436774 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fdc94dbe-bca6-479b-abce-c8cea50461dc-secret-volume\") pod \"fdc94dbe-bca6-479b-abce-c8cea50461dc\" (UID: \"fdc94dbe-bca6-479b-abce-c8cea50461dc\") " Dec 01 15:15:03 crc kubenswrapper[4637]: I1201 15:15:03.437197 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdc94dbe-bca6-479b-abce-c8cea50461dc-config-volume\") pod \"fdc94dbe-bca6-479b-abce-c8cea50461dc\" (UID: \"fdc94dbe-bca6-479b-abce-c8cea50461dc\") " Dec 01 15:15:03 crc kubenswrapper[4637]: I1201 15:15:03.437388 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg79h\" (UniqueName: \"kubernetes.io/projected/fdc94dbe-bca6-479b-abce-c8cea50461dc-kube-api-access-dg79h\") pod \"fdc94dbe-bca6-479b-abce-c8cea50461dc\" (UID: \"fdc94dbe-bca6-479b-abce-c8cea50461dc\") " Dec 01 15:15:03 crc kubenswrapper[4637]: I1201 15:15:03.438762 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdc94dbe-bca6-479b-abce-c8cea50461dc-config-volume" (OuterVolumeSpecName: "config-volume") pod "fdc94dbe-bca6-479b-abce-c8cea50461dc" (UID: "fdc94dbe-bca6-479b-abce-c8cea50461dc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:15:03 crc kubenswrapper[4637]: I1201 15:15:03.444396 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdc94dbe-bca6-479b-abce-c8cea50461dc-kube-api-access-dg79h" (OuterVolumeSpecName: "kube-api-access-dg79h") pod "fdc94dbe-bca6-479b-abce-c8cea50461dc" (UID: "fdc94dbe-bca6-479b-abce-c8cea50461dc"). 
InnerVolumeSpecName "kube-api-access-dg79h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:15:03 crc kubenswrapper[4637]: I1201 15:15:03.444481 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc94dbe-bca6-479b-abce-c8cea50461dc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fdc94dbe-bca6-479b-abce-c8cea50461dc" (UID: "fdc94dbe-bca6-479b-abce-c8cea50461dc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:15:03 crc kubenswrapper[4637]: I1201 15:15:03.539713 4637 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fdc94dbe-bca6-479b-abce-c8cea50461dc-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 15:15:03 crc kubenswrapper[4637]: I1201 15:15:03.540151 4637 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdc94dbe-bca6-479b-abce-c8cea50461dc-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 15:15:03 crc kubenswrapper[4637]: I1201 15:15:03.540163 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg79h\" (UniqueName: \"kubernetes.io/projected/fdc94dbe-bca6-479b-abce-c8cea50461dc-kube-api-access-dg79h\") on node \"crc\" DevicePath \"\"" Dec 01 15:15:04 crc kubenswrapper[4637]: I1201 15:15:04.008116 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29410035-7zj7l" event={"ID":"fdc94dbe-bca6-479b-abce-c8cea50461dc","Type":"ContainerDied","Data":"590a52510c03ee13cb75287c64b22cc1f87ce6cc6641b085715e78e2ad7f47b4"} Dec 01 15:15:04 crc kubenswrapper[4637]: I1201 15:15:04.008173 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="590a52510c03ee13cb75287c64b22cc1f87ce6cc6641b085715e78e2ad7f47b4" Dec 01 15:15:04 crc kubenswrapper[4637]: I1201 15:15:04.009102 4637 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29410035-7zj7l" Dec 01 15:15:10 crc kubenswrapper[4637]: I1201 15:15:10.771394 4637 scope.go:117] "RemoveContainer" containerID="6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8" Dec 01 15:15:10 crc kubenswrapper[4637]: E1201 15:15:10.772014 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:15:15 crc kubenswrapper[4637]: I1201 15:15:15.042998 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-vlwgj"] Dec 01 15:15:15 crc kubenswrapper[4637]: I1201 15:15:15.053971 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-vlwgj"] Dec 01 15:15:15 crc kubenswrapper[4637]: I1201 15:15:15.781584 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75ffcaf0-9c6e-4f8e-98a0-b9c44529527d" path="/var/lib/kubelet/pods/75ffcaf0-9c6e-4f8e-98a0-b9c44529527d/volumes" Dec 01 15:15:19 crc kubenswrapper[4637]: I1201 15:15:19.037268 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-6k8ct"] Dec 01 15:15:19 crc kubenswrapper[4637]: I1201 15:15:19.047528 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-g96w2"] Dec 01 15:15:19 crc kubenswrapper[4637]: I1201 15:15:19.060529 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-25b5n"] Dec 01 15:15:19 crc kubenswrapper[4637]: I1201 15:15:19.068226 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-api-db-create-g96w2"] Dec 01 15:15:19 crc kubenswrapper[4637]: I1201 15:15:19.074754 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-25b5n"] Dec 01 15:15:19 crc kubenswrapper[4637]: I1201 15:15:19.081054 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-6k8ct"] Dec 01 15:15:19 crc kubenswrapper[4637]: I1201 15:15:19.783028 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ceb774a-fe03-4b01-a371-d0a8b10e6b3d" path="/var/lib/kubelet/pods/1ceb774a-fe03-4b01-a371-d0a8b10e6b3d/volumes" Dec 01 15:15:19 crc kubenswrapper[4637]: I1201 15:15:19.783795 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44eb4d02-ab26-4592-9b34-698fe46a8c51" path="/var/lib/kubelet/pods/44eb4d02-ab26-4592-9b34-698fe46a8c51/volumes" Dec 01 15:15:19 crc kubenswrapper[4637]: I1201 15:15:19.784295 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e6ab34a-72f6-4908-a148-e08b308afb1a" path="/var/lib/kubelet/pods/7e6ab34a-72f6-4908-a148-e08b308afb1a/volumes" Dec 01 15:15:25 crc kubenswrapper[4637]: I1201 15:15:25.771279 4637 scope.go:117] "RemoveContainer" containerID="6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8" Dec 01 15:15:25 crc kubenswrapper[4637]: E1201 15:15:25.772257 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:15:27 crc kubenswrapper[4637]: I1201 15:15:27.047728 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-9f3a-account-create-thgql"] Dec 01 15:15:27 crc 
kubenswrapper[4637]: I1201 15:15:27.060109 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-53ce-account-create-v7kbp"] Dec 01 15:15:27 crc kubenswrapper[4637]: I1201 15:15:27.070134 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-4785-account-create-88s8h"] Dec 01 15:15:27 crc kubenswrapper[4637]: I1201 15:15:27.078316 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-53ce-account-create-v7kbp"] Dec 01 15:15:27 crc kubenswrapper[4637]: I1201 15:15:27.086235 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-9f3a-account-create-thgql"] Dec 01 15:15:27 crc kubenswrapper[4637]: I1201 15:15:27.093807 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-4785-account-create-88s8h"] Dec 01 15:15:27 crc kubenswrapper[4637]: I1201 15:15:27.796762 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="152d8240-ac76-4db8-b611-e1a3e62a91c6" path="/var/lib/kubelet/pods/152d8240-ac76-4db8-b611-e1a3e62a91c6/volumes" Dec 01 15:15:27 crc kubenswrapper[4637]: I1201 15:15:27.798182 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="883863aa-96fa-4ca9-b354-31239ab536cc" path="/var/lib/kubelet/pods/883863aa-96fa-4ca9-b354-31239ab536cc/volumes" Dec 01 15:15:27 crc kubenswrapper[4637]: I1201 15:15:27.800110 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d51c2122-e195-4811-b65a-677e548f80ea" path="/var/lib/kubelet/pods/d51c2122-e195-4811-b65a-677e548f80ea/volumes" Dec 01 15:15:33 crc kubenswrapper[4637]: I1201 15:15:33.285682 4637 scope.go:117] "RemoveContainer" containerID="358bb6428ff64f29600393815078fca71afd2d0d5037d1c704ece89828dacea1" Dec 01 15:15:33 crc kubenswrapper[4637]: I1201 15:15:33.328243 4637 scope.go:117] "RemoveContainer" containerID="bac74301707959187b020c731449d47d76995b9b889bf09b7d75d86b7d79eb64" Dec 01 15:15:33 crc kubenswrapper[4637]: I1201 
15:15:33.365142 4637 scope.go:117] "RemoveContainer" containerID="6d8a29b4e4e8cbb560dfff820c172d5a60de4077c174cbff8cbb05a068fc28af" Dec 01 15:15:33 crc kubenswrapper[4637]: I1201 15:15:33.405716 4637 scope.go:117] "RemoveContainer" containerID="38274d56c7e6f98509e46809c04207b9585746ab7c01597e6026bb1a03374e61" Dec 01 15:15:33 crc kubenswrapper[4637]: I1201 15:15:33.449988 4637 scope.go:117] "RemoveContainer" containerID="7f132f75ca313511f7e2de9532e9d8538b2c463379d270be61033ae1bdc00b40" Dec 01 15:15:33 crc kubenswrapper[4637]: I1201 15:15:33.511695 4637 scope.go:117] "RemoveContainer" containerID="a2697d3eb84cef1935a77452f8f50e75a2a9834c27e819a3a2a15fcb2ba190a0" Dec 01 15:15:33 crc kubenswrapper[4637]: I1201 15:15:33.546470 4637 scope.go:117] "RemoveContainer" containerID="dd22e78fc9b73287b13f9364a578a07bf6ab507a5f3023e5474689f85cc9ce2f" Dec 01 15:15:33 crc kubenswrapper[4637]: I1201 15:15:33.564080 4637 scope.go:117] "RemoveContainer" containerID="a340851ca1700ebddd37a2f7dded0b5519a9d28e492139ee8ebabda9298898b1" Dec 01 15:15:36 crc kubenswrapper[4637]: I1201 15:15:36.771600 4637 scope.go:117] "RemoveContainer" containerID="6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8" Dec 01 15:15:36 crc kubenswrapper[4637]: E1201 15:15:36.772605 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:15:49 crc kubenswrapper[4637]: I1201 15:15:49.785807 4637 scope.go:117] "RemoveContainer" containerID="6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8" Dec 01 15:15:49 crc kubenswrapper[4637]: E1201 15:15:49.786479 4637 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:16:03 crc kubenswrapper[4637]: I1201 15:16:03.771344 4637 scope.go:117] "RemoveContainer" containerID="6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8" Dec 01 15:16:03 crc kubenswrapper[4637]: E1201 15:16:03.772028 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:16:04 crc kubenswrapper[4637]: I1201 15:16:04.052678 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ddhlw"] Dec 01 15:16:04 crc kubenswrapper[4637]: I1201 15:16:04.059888 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ddhlw"] Dec 01 15:16:05 crc kubenswrapper[4637]: I1201 15:16:05.782369 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="525ed4f2-f6a1-436b-8119-cf3fc620c6a7" path="/var/lib/kubelet/pods/525ed4f2-f6a1-436b-8119-cf3fc620c6a7/volumes" Dec 01 15:16:09 crc kubenswrapper[4637]: I1201 15:16:09.581142 4637 generic.go:334] "Generic (PLEG): container finished" podID="8152b193-b04b-4380-9596-60c61cc82ef7" containerID="749ed7475eee0ca8211c4729d6836188e69a68c011a4459c8c26a5bd40c3ea6d" exitCode=0 Dec 01 15:16:09 crc kubenswrapper[4637]: I1201 15:16:09.581391 4637 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-khhg5" event={"ID":"8152b193-b04b-4380-9596-60c61cc82ef7","Type":"ContainerDied","Data":"749ed7475eee0ca8211c4729d6836188e69a68c011a4459c8c26a5bd40c3ea6d"} Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.010323 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-khhg5" Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.096647 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8152b193-b04b-4380-9596-60c61cc82ef7-inventory\") pod \"8152b193-b04b-4380-9596-60c61cc82ef7\" (UID: \"8152b193-b04b-4380-9596-60c61cc82ef7\") " Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.096977 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb8bm\" (UniqueName: \"kubernetes.io/projected/8152b193-b04b-4380-9596-60c61cc82ef7-kube-api-access-xb8bm\") pod \"8152b193-b04b-4380-9596-60c61cc82ef7\" (UID: \"8152b193-b04b-4380-9596-60c61cc82ef7\") " Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.097058 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8152b193-b04b-4380-9596-60c61cc82ef7-ssh-key\") pod \"8152b193-b04b-4380-9596-60c61cc82ef7\" (UID: \"8152b193-b04b-4380-9596-60c61cc82ef7\") " Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.103422 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8152b193-b04b-4380-9596-60c61cc82ef7-kube-api-access-xb8bm" (OuterVolumeSpecName: "kube-api-access-xb8bm") pod "8152b193-b04b-4380-9596-60c61cc82ef7" (UID: "8152b193-b04b-4380-9596-60c61cc82ef7"). InnerVolumeSpecName "kube-api-access-xb8bm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.131645 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8152b193-b04b-4380-9596-60c61cc82ef7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8152b193-b04b-4380-9596-60c61cc82ef7" (UID: "8152b193-b04b-4380-9596-60c61cc82ef7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.135091 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8152b193-b04b-4380-9596-60c61cc82ef7-inventory" (OuterVolumeSpecName: "inventory") pod "8152b193-b04b-4380-9596-60c61cc82ef7" (UID: "8152b193-b04b-4380-9596-60c61cc82ef7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.198911 4637 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8152b193-b04b-4380-9596-60c61cc82ef7-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.198952 4637 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8152b193-b04b-4380-9596-60c61cc82ef7-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.198963 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb8bm\" (UniqueName: \"kubernetes.io/projected/8152b193-b04b-4380-9596-60c61cc82ef7-kube-api-access-xb8bm\") on node \"crc\" DevicePath \"\"" Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.598814 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-khhg5" 
event={"ID":"8152b193-b04b-4380-9596-60c61cc82ef7","Type":"ContainerDied","Data":"df8b2cda4f2200649bcfc199551a562540eb29e86a952f8f62246de38443e8da"} Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.598859 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df8b2cda4f2200649bcfc199551a562540eb29e86a952f8f62246de38443e8da" Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.598883 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-khhg5" Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.710103 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vppnt"] Dec 01 15:16:11 crc kubenswrapper[4637]: E1201 15:16:11.710747 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8152b193-b04b-4380-9596-60c61cc82ef7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.710776 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="8152b193-b04b-4380-9596-60c61cc82ef7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 01 15:16:11 crc kubenswrapper[4637]: E1201 15:16:11.710822 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdc94dbe-bca6-479b-abce-c8cea50461dc" containerName="collect-profiles" Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.710832 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc94dbe-bca6-479b-abce-c8cea50461dc" containerName="collect-profiles" Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.711142 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="8152b193-b04b-4380-9596-60c61cc82ef7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.711181 4637 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="fdc94dbe-bca6-479b-abce-c8cea50461dc" containerName="collect-profiles" Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.713369 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vppnt" Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.717957 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.718156 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lt5wx" Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.718258 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.718276 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.723300 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vppnt"] Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.811336 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a9a0491-0639-431b-bf41-812e29d6f3b4-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vppnt\" (UID: \"7a9a0491-0639-431b-bf41-812e29d6f3b4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vppnt" Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.811810 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kflm4\" (UniqueName: \"kubernetes.io/projected/7a9a0491-0639-431b-bf41-812e29d6f3b4-kube-api-access-kflm4\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-vppnt\" (UID: \"7a9a0491-0639-431b-bf41-812e29d6f3b4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vppnt" Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.811856 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7a9a0491-0639-431b-bf41-812e29d6f3b4-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vppnt\" (UID: \"7a9a0491-0639-431b-bf41-812e29d6f3b4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vppnt" Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.913918 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a9a0491-0639-431b-bf41-812e29d6f3b4-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vppnt\" (UID: \"7a9a0491-0639-431b-bf41-812e29d6f3b4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vppnt" Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.914550 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kflm4\" (UniqueName: \"kubernetes.io/projected/7a9a0491-0639-431b-bf41-812e29d6f3b4-kube-api-access-kflm4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vppnt\" (UID: \"7a9a0491-0639-431b-bf41-812e29d6f3b4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vppnt" Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.914626 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7a9a0491-0639-431b-bf41-812e29d6f3b4-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vppnt\" (UID: \"7a9a0491-0639-431b-bf41-812e29d6f3b4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vppnt" Dec 01 15:16:11 crc 
kubenswrapper[4637]: I1201 15:16:11.918855 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7a9a0491-0639-431b-bf41-812e29d6f3b4-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vppnt\" (UID: \"7a9a0491-0639-431b-bf41-812e29d6f3b4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vppnt" Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.932824 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a9a0491-0639-431b-bf41-812e29d6f3b4-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vppnt\" (UID: \"7a9a0491-0639-431b-bf41-812e29d6f3b4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vppnt" Dec 01 15:16:11 crc kubenswrapper[4637]: I1201 15:16:11.944639 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kflm4\" (UniqueName: \"kubernetes.io/projected/7a9a0491-0639-431b-bf41-812e29d6f3b4-kube-api-access-kflm4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vppnt\" (UID: \"7a9a0491-0639-431b-bf41-812e29d6f3b4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vppnt" Dec 01 15:16:12 crc kubenswrapper[4637]: I1201 15:16:12.043654 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vppnt" Dec 01 15:16:12 crc kubenswrapper[4637]: I1201 15:16:12.833402 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vppnt"] Dec 01 15:16:13 crc kubenswrapper[4637]: I1201 15:16:13.620297 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vppnt" event={"ID":"7a9a0491-0639-431b-bf41-812e29d6f3b4","Type":"ContainerStarted","Data":"e84e1bfac12b75459fb651dd7001b253c3cfb249978957792cadd26a99ca52dc"} Dec 01 15:16:13 crc kubenswrapper[4637]: I1201 15:16:13.620846 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vppnt" event={"ID":"7a9a0491-0639-431b-bf41-812e29d6f3b4","Type":"ContainerStarted","Data":"162dfc4f3b6e6fac264828254ba226ae4968832782839ce4bc2b025d6d29a9bd"} Dec 01 15:16:14 crc kubenswrapper[4637]: I1201 15:16:14.771191 4637 scope.go:117] "RemoveContainer" containerID="6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8" Dec 01 15:16:14 crc kubenswrapper[4637]: E1201 15:16:14.771830 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:16:19 crc kubenswrapper[4637]: I1201 15:16:19.673416 4637 generic.go:334] "Generic (PLEG): container finished" podID="7a9a0491-0639-431b-bf41-812e29d6f3b4" containerID="e84e1bfac12b75459fb651dd7001b253c3cfb249978957792cadd26a99ca52dc" exitCode=0 Dec 01 15:16:19 crc kubenswrapper[4637]: I1201 15:16:19.673496 4637 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vppnt" event={"ID":"7a9a0491-0639-431b-bf41-812e29d6f3b4","Type":"ContainerDied","Data":"e84e1bfac12b75459fb651dd7001b253c3cfb249978957792cadd26a99ca52dc"} Dec 01 15:16:21 crc kubenswrapper[4637]: I1201 15:16:21.133367 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vppnt" Dec 01 15:16:21 crc kubenswrapper[4637]: I1201 15:16:21.197924 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7a9a0491-0639-431b-bf41-812e29d6f3b4-ssh-key\") pod \"7a9a0491-0639-431b-bf41-812e29d6f3b4\" (UID: \"7a9a0491-0639-431b-bf41-812e29d6f3b4\") " Dec 01 15:16:21 crc kubenswrapper[4637]: I1201 15:16:21.198106 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a9a0491-0639-431b-bf41-812e29d6f3b4-inventory\") pod \"7a9a0491-0639-431b-bf41-812e29d6f3b4\" (UID: \"7a9a0491-0639-431b-bf41-812e29d6f3b4\") " Dec 01 15:16:21 crc kubenswrapper[4637]: I1201 15:16:21.198138 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kflm4\" (UniqueName: \"kubernetes.io/projected/7a9a0491-0639-431b-bf41-812e29d6f3b4-kube-api-access-kflm4\") pod \"7a9a0491-0639-431b-bf41-812e29d6f3b4\" (UID: \"7a9a0491-0639-431b-bf41-812e29d6f3b4\") " Dec 01 15:16:21 crc kubenswrapper[4637]: I1201 15:16:21.206974 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a9a0491-0639-431b-bf41-812e29d6f3b4-kube-api-access-kflm4" (OuterVolumeSpecName: "kube-api-access-kflm4") pod "7a9a0491-0639-431b-bf41-812e29d6f3b4" (UID: "7a9a0491-0639-431b-bf41-812e29d6f3b4"). InnerVolumeSpecName "kube-api-access-kflm4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:16:21 crc kubenswrapper[4637]: I1201 15:16:21.250416 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9a0491-0639-431b-bf41-812e29d6f3b4-inventory" (OuterVolumeSpecName: "inventory") pod "7a9a0491-0639-431b-bf41-812e29d6f3b4" (UID: "7a9a0491-0639-431b-bf41-812e29d6f3b4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:16:21 crc kubenswrapper[4637]: I1201 15:16:21.250753 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9a0491-0639-431b-bf41-812e29d6f3b4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7a9a0491-0639-431b-bf41-812e29d6f3b4" (UID: "7a9a0491-0639-431b-bf41-812e29d6f3b4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:16:21 crc kubenswrapper[4637]: I1201 15:16:21.301154 4637 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7a9a0491-0639-431b-bf41-812e29d6f3b4-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 15:16:21 crc kubenswrapper[4637]: I1201 15:16:21.301248 4637 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a9a0491-0639-431b-bf41-812e29d6f3b4-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 15:16:21 crc kubenswrapper[4637]: I1201 15:16:21.301263 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kflm4\" (UniqueName: \"kubernetes.io/projected/7a9a0491-0639-431b-bf41-812e29d6f3b4-kube-api-access-kflm4\") on node \"crc\" DevicePath \"\"" Dec 01 15:16:21 crc kubenswrapper[4637]: I1201 15:16:21.702537 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vppnt" 
event={"ID":"7a9a0491-0639-431b-bf41-812e29d6f3b4","Type":"ContainerDied","Data":"162dfc4f3b6e6fac264828254ba226ae4968832782839ce4bc2b025d6d29a9bd"} Dec 01 15:16:21 crc kubenswrapper[4637]: I1201 15:16:21.702594 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="162dfc4f3b6e6fac264828254ba226ae4968832782839ce4bc2b025d6d29a9bd" Dec 01 15:16:21 crc kubenswrapper[4637]: I1201 15:16:21.702687 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vppnt" Dec 01 15:16:21 crc kubenswrapper[4637]: I1201 15:16:21.840009 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-jmj6p"] Dec 01 15:16:21 crc kubenswrapper[4637]: E1201 15:16:21.841140 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a9a0491-0639-431b-bf41-812e29d6f3b4" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 01 15:16:21 crc kubenswrapper[4637]: I1201 15:16:21.841168 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a9a0491-0639-431b-bf41-812e29d6f3b4" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 01 15:16:21 crc kubenswrapper[4637]: I1201 15:16:21.841647 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a9a0491-0639-431b-bf41-812e29d6f3b4" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 01 15:16:21 crc kubenswrapper[4637]: I1201 15:16:21.843092 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jmj6p" Dec 01 15:16:21 crc kubenswrapper[4637]: I1201 15:16:21.846014 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 15:16:21 crc kubenswrapper[4637]: I1201 15:16:21.846301 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lt5wx" Dec 01 15:16:21 crc kubenswrapper[4637]: I1201 15:16:21.846506 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 15:16:21 crc kubenswrapper[4637]: I1201 15:16:21.849691 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 15:16:21 crc kubenswrapper[4637]: I1201 15:16:21.858720 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-jmj6p"] Dec 01 15:16:21 crc kubenswrapper[4637]: I1201 15:16:21.918529 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4144f6ad-f95f-4e2e-a9d3-003cdc5ef439-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jmj6p\" (UID: \"4144f6ad-f95f-4e2e-a9d3-003cdc5ef439\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jmj6p" Dec 01 15:16:21 crc kubenswrapper[4637]: I1201 15:16:21.918826 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4144f6ad-f95f-4e2e-a9d3-003cdc5ef439-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jmj6p\" (UID: \"4144f6ad-f95f-4e2e-a9d3-003cdc5ef439\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jmj6p" Dec 01 15:16:21 crc kubenswrapper[4637]: I1201 15:16:21.919004 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqlkv\" (UniqueName: \"kubernetes.io/projected/4144f6ad-f95f-4e2e-a9d3-003cdc5ef439-kube-api-access-bqlkv\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jmj6p\" (UID: \"4144f6ad-f95f-4e2e-a9d3-003cdc5ef439\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jmj6p" Dec 01 15:16:22 crc kubenswrapper[4637]: I1201 15:16:22.021068 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqlkv\" (UniqueName: \"kubernetes.io/projected/4144f6ad-f95f-4e2e-a9d3-003cdc5ef439-kube-api-access-bqlkv\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jmj6p\" (UID: \"4144f6ad-f95f-4e2e-a9d3-003cdc5ef439\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jmj6p" Dec 01 15:16:22 crc kubenswrapper[4637]: I1201 15:16:22.021187 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4144f6ad-f95f-4e2e-a9d3-003cdc5ef439-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jmj6p\" (UID: \"4144f6ad-f95f-4e2e-a9d3-003cdc5ef439\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jmj6p" Dec 01 15:16:22 crc kubenswrapper[4637]: I1201 15:16:22.021262 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4144f6ad-f95f-4e2e-a9d3-003cdc5ef439-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jmj6p\" (UID: \"4144f6ad-f95f-4e2e-a9d3-003cdc5ef439\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jmj6p" Dec 01 15:16:22 crc kubenswrapper[4637]: I1201 15:16:22.025771 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4144f6ad-f95f-4e2e-a9d3-003cdc5ef439-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jmj6p\" (UID: 
\"4144f6ad-f95f-4e2e-a9d3-003cdc5ef439\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jmj6p" Dec 01 15:16:22 crc kubenswrapper[4637]: I1201 15:16:22.026189 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4144f6ad-f95f-4e2e-a9d3-003cdc5ef439-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jmj6p\" (UID: \"4144f6ad-f95f-4e2e-a9d3-003cdc5ef439\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jmj6p" Dec 01 15:16:22 crc kubenswrapper[4637]: I1201 15:16:22.036771 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqlkv\" (UniqueName: \"kubernetes.io/projected/4144f6ad-f95f-4e2e-a9d3-003cdc5ef439-kube-api-access-bqlkv\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jmj6p\" (UID: \"4144f6ad-f95f-4e2e-a9d3-003cdc5ef439\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jmj6p" Dec 01 15:16:22 crc kubenswrapper[4637]: I1201 15:16:22.184080 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jmj6p" Dec 01 15:16:22 crc kubenswrapper[4637]: I1201 15:16:22.687570 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-jmj6p"] Dec 01 15:16:22 crc kubenswrapper[4637]: I1201 15:16:22.711942 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jmj6p" event={"ID":"4144f6ad-f95f-4e2e-a9d3-003cdc5ef439","Type":"ContainerStarted","Data":"d44dfa851fcab4b883470171a552a3a604ca38091bf5f50f54322c4472d038bc"} Dec 01 15:16:23 crc kubenswrapper[4637]: I1201 15:16:23.725347 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jmj6p" event={"ID":"4144f6ad-f95f-4e2e-a9d3-003cdc5ef439","Type":"ContainerStarted","Data":"6b87b525c6631faa2130ecd37dc485d31073e1c7b5c9cb7c59299d22ed15d91f"} Dec 01 15:16:23 crc kubenswrapper[4637]: I1201 15:16:23.740168 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jmj6p" podStartSLOduration=2.272932353 podStartE2EDuration="2.740150321s" podCreationTimestamp="2025-12-01 15:16:21 +0000 UTC" firstStartedPulling="2025-12-01 15:16:22.693866339 +0000 UTC m=+1833.211575167" lastFinishedPulling="2025-12-01 15:16:23.161084307 +0000 UTC m=+1833.678793135" observedRunningTime="2025-12-01 15:16:23.737306144 +0000 UTC m=+1834.255014972" watchObservedRunningTime="2025-12-01 15:16:23.740150321 +0000 UTC m=+1834.257859149" Dec 01 15:16:27 crc kubenswrapper[4637]: I1201 15:16:27.771376 4637 scope.go:117] "RemoveContainer" containerID="6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8" Dec 01 15:16:27 crc kubenswrapper[4637]: E1201 15:16:27.773680 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:16:30 crc kubenswrapper[4637]: I1201 15:16:30.053032 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tq4rf"] Dec 01 15:16:30 crc kubenswrapper[4637]: I1201 15:16:30.062207 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-thknm"] Dec 01 15:16:30 crc kubenswrapper[4637]: I1201 15:16:30.075785 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-thknm"] Dec 01 15:16:30 crc kubenswrapper[4637]: I1201 15:16:30.083994 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tq4rf"] Dec 01 15:16:31 crc kubenswrapper[4637]: I1201 15:16:31.784055 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71511747-8af4-48e6-8d8e-65c200a23c34" path="/var/lib/kubelet/pods/71511747-8af4-48e6-8d8e-65c200a23c34/volumes" Dec 01 15:16:31 crc kubenswrapper[4637]: I1201 15:16:31.785818 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9abff6ee-1b19-4555-9ba5-2f522f7b3d2c" path="/var/lib/kubelet/pods/9abff6ee-1b19-4555-9ba5-2f522f7b3d2c/volumes" Dec 01 15:16:33 crc kubenswrapper[4637]: I1201 15:16:33.701747 4637 scope.go:117] "RemoveContainer" containerID="5acfaaeeaa2f69aedd23bdfe290b869c6e44324a651880de234633292d0fc90e" Dec 01 15:16:33 crc kubenswrapper[4637]: I1201 15:16:33.760366 4637 scope.go:117] "RemoveContainer" containerID="1a169fbfbacbed0622d8ee77116e7c0c1c2b11f112cfa139eb69cd39052ce9f0" Dec 01 15:16:33 crc kubenswrapper[4637]: I1201 15:16:33.809307 4637 scope.go:117] "RemoveContainer" containerID="4c0b37344f846e257e75801a2714dd437589f9bce749321dd90ef3d8d6e40aa7" Dec 
01 15:16:38 crc kubenswrapper[4637]: I1201 15:16:38.771706 4637 scope.go:117] "RemoveContainer" containerID="6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8" Dec 01 15:16:38 crc kubenswrapper[4637]: E1201 15:16:38.772551 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:16:49 crc kubenswrapper[4637]: I1201 15:16:49.779234 4637 scope.go:117] "RemoveContainer" containerID="6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8" Dec 01 15:16:50 crc kubenswrapper[4637]: I1201 15:16:50.999389 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerStarted","Data":"c5fd51a07a1e38b401f16a4fb3406e418b65fdf2fe7bd0db652e567f20de551e"} Dec 01 15:17:09 crc kubenswrapper[4637]: I1201 15:17:09.185901 4637 generic.go:334] "Generic (PLEG): container finished" podID="4144f6ad-f95f-4e2e-a9d3-003cdc5ef439" containerID="6b87b525c6631faa2130ecd37dc485d31073e1c7b5c9cb7c59299d22ed15d91f" exitCode=0 Dec 01 15:17:09 crc kubenswrapper[4637]: I1201 15:17:09.185969 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jmj6p" event={"ID":"4144f6ad-f95f-4e2e-a9d3-003cdc5ef439","Type":"ContainerDied","Data":"6b87b525c6631faa2130ecd37dc485d31073e1c7b5c9cb7c59299d22ed15d91f"} Dec 01 15:17:10 crc kubenswrapper[4637]: I1201 15:17:10.668626 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jmj6p" Dec 01 15:17:10 crc kubenswrapper[4637]: I1201 15:17:10.729054 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4144f6ad-f95f-4e2e-a9d3-003cdc5ef439-ssh-key\") pod \"4144f6ad-f95f-4e2e-a9d3-003cdc5ef439\" (UID: \"4144f6ad-f95f-4e2e-a9d3-003cdc5ef439\") " Dec 01 15:17:10 crc kubenswrapper[4637]: I1201 15:17:10.729982 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqlkv\" (UniqueName: \"kubernetes.io/projected/4144f6ad-f95f-4e2e-a9d3-003cdc5ef439-kube-api-access-bqlkv\") pod \"4144f6ad-f95f-4e2e-a9d3-003cdc5ef439\" (UID: \"4144f6ad-f95f-4e2e-a9d3-003cdc5ef439\") " Dec 01 15:17:10 crc kubenswrapper[4637]: I1201 15:17:10.730243 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4144f6ad-f95f-4e2e-a9d3-003cdc5ef439-inventory\") pod \"4144f6ad-f95f-4e2e-a9d3-003cdc5ef439\" (UID: \"4144f6ad-f95f-4e2e-a9d3-003cdc5ef439\") " Dec 01 15:17:10 crc kubenswrapper[4637]: I1201 15:17:10.745313 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4144f6ad-f95f-4e2e-a9d3-003cdc5ef439-kube-api-access-bqlkv" (OuterVolumeSpecName: "kube-api-access-bqlkv") pod "4144f6ad-f95f-4e2e-a9d3-003cdc5ef439" (UID: "4144f6ad-f95f-4e2e-a9d3-003cdc5ef439"). InnerVolumeSpecName "kube-api-access-bqlkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:17:10 crc kubenswrapper[4637]: I1201 15:17:10.757573 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4144f6ad-f95f-4e2e-a9d3-003cdc5ef439-inventory" (OuterVolumeSpecName: "inventory") pod "4144f6ad-f95f-4e2e-a9d3-003cdc5ef439" (UID: "4144f6ad-f95f-4e2e-a9d3-003cdc5ef439"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:17:10 crc kubenswrapper[4637]: I1201 15:17:10.759182 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4144f6ad-f95f-4e2e-a9d3-003cdc5ef439-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4144f6ad-f95f-4e2e-a9d3-003cdc5ef439" (UID: "4144f6ad-f95f-4e2e-a9d3-003cdc5ef439"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:17:10 crc kubenswrapper[4637]: I1201 15:17:10.833527 4637 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4144f6ad-f95f-4e2e-a9d3-003cdc5ef439-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 15:17:10 crc kubenswrapper[4637]: I1201 15:17:10.833567 4637 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4144f6ad-f95f-4e2e-a9d3-003cdc5ef439-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 15:17:10 crc kubenswrapper[4637]: I1201 15:17:10.833604 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqlkv\" (UniqueName: \"kubernetes.io/projected/4144f6ad-f95f-4e2e-a9d3-003cdc5ef439-kube-api-access-bqlkv\") on node \"crc\" DevicePath \"\"" Dec 01 15:17:11 crc kubenswrapper[4637]: I1201 15:17:11.215358 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jmj6p" event={"ID":"4144f6ad-f95f-4e2e-a9d3-003cdc5ef439","Type":"ContainerDied","Data":"d44dfa851fcab4b883470171a552a3a604ca38091bf5f50f54322c4472d038bc"} Dec 01 15:17:11 crc kubenswrapper[4637]: I1201 15:17:11.215832 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d44dfa851fcab4b883470171a552a3a604ca38091bf5f50f54322c4472d038bc" Dec 01 15:17:11 crc kubenswrapper[4637]: I1201 15:17:11.215387 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jmj6p" Dec 01 15:17:11 crc kubenswrapper[4637]: I1201 15:17:11.312560 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-prh4q"] Dec 01 15:17:11 crc kubenswrapper[4637]: E1201 15:17:11.313426 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4144f6ad-f95f-4e2e-a9d3-003cdc5ef439" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 01 15:17:11 crc kubenswrapper[4637]: I1201 15:17:11.313522 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="4144f6ad-f95f-4e2e-a9d3-003cdc5ef439" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 01 15:17:11 crc kubenswrapper[4637]: I1201 15:17:11.313786 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="4144f6ad-f95f-4e2e-a9d3-003cdc5ef439" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 01 15:17:11 crc kubenswrapper[4637]: I1201 15:17:11.314857 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-prh4q" Dec 01 15:17:11 crc kubenswrapper[4637]: I1201 15:17:11.322150 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 15:17:11 crc kubenswrapper[4637]: I1201 15:17:11.322712 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 15:17:11 crc kubenswrapper[4637]: I1201 15:17:11.323252 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-prh4q"] Dec 01 15:17:11 crc kubenswrapper[4637]: I1201 15:17:11.323278 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lt5wx" Dec 01 15:17:11 crc kubenswrapper[4637]: I1201 15:17:11.323759 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 15:17:11 crc kubenswrapper[4637]: I1201 15:17:11.448287 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxzfm\" (UniqueName: \"kubernetes.io/projected/d76c43c1-7a6f-41c6-b052-5363182c236c-kube-api-access-jxzfm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-prh4q\" (UID: \"d76c43c1-7a6f-41c6-b052-5363182c236c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-prh4q" Dec 01 15:17:11 crc kubenswrapper[4637]: I1201 15:17:11.448827 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d76c43c1-7a6f-41c6-b052-5363182c236c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-prh4q\" (UID: \"d76c43c1-7a6f-41c6-b052-5363182c236c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-prh4q" Dec 01 15:17:11 crc kubenswrapper[4637]: I1201 15:17:11.449029 4637 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d76c43c1-7a6f-41c6-b052-5363182c236c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-prh4q\" (UID: \"d76c43c1-7a6f-41c6-b052-5363182c236c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-prh4q" Dec 01 15:17:11 crc kubenswrapper[4637]: I1201 15:17:11.550725 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxzfm\" (UniqueName: \"kubernetes.io/projected/d76c43c1-7a6f-41c6-b052-5363182c236c-kube-api-access-jxzfm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-prh4q\" (UID: \"d76c43c1-7a6f-41c6-b052-5363182c236c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-prh4q" Dec 01 15:17:11 crc kubenswrapper[4637]: I1201 15:17:11.550811 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d76c43c1-7a6f-41c6-b052-5363182c236c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-prh4q\" (UID: \"d76c43c1-7a6f-41c6-b052-5363182c236c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-prh4q" Dec 01 15:17:11 crc kubenswrapper[4637]: I1201 15:17:11.550872 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d76c43c1-7a6f-41c6-b052-5363182c236c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-prh4q\" (UID: \"d76c43c1-7a6f-41c6-b052-5363182c236c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-prh4q" Dec 01 15:17:11 crc kubenswrapper[4637]: I1201 15:17:11.556267 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d76c43c1-7a6f-41c6-b052-5363182c236c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-prh4q\" (UID: 
\"d76c43c1-7a6f-41c6-b052-5363182c236c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-prh4q" Dec 01 15:17:11 crc kubenswrapper[4637]: I1201 15:17:11.556960 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d76c43c1-7a6f-41c6-b052-5363182c236c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-prh4q\" (UID: \"d76c43c1-7a6f-41c6-b052-5363182c236c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-prh4q" Dec 01 15:17:11 crc kubenswrapper[4637]: I1201 15:17:11.577262 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxzfm\" (UniqueName: \"kubernetes.io/projected/d76c43c1-7a6f-41c6-b052-5363182c236c-kube-api-access-jxzfm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-prh4q\" (UID: \"d76c43c1-7a6f-41c6-b052-5363182c236c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-prh4q" Dec 01 15:17:11 crc kubenswrapper[4637]: I1201 15:17:11.651591 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-prh4q" Dec 01 15:17:12 crc kubenswrapper[4637]: I1201 15:17:12.178039 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-prh4q"] Dec 01 15:17:12 crc kubenswrapper[4637]: I1201 15:17:12.197637 4637 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 15:17:12 crc kubenswrapper[4637]: I1201 15:17:12.226420 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-prh4q" event={"ID":"d76c43c1-7a6f-41c6-b052-5363182c236c","Type":"ContainerStarted","Data":"584377d4fd1f86e14c7faf5386ec1aee4e37b7a1869dbd419836cf449259aeaa"} Dec 01 15:17:13 crc kubenswrapper[4637]: I1201 15:17:13.234264 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-prh4q" event={"ID":"d76c43c1-7a6f-41c6-b052-5363182c236c","Type":"ContainerStarted","Data":"6a533b5715abf99d2940e8a2c191e57108f14801192ff2058c0547252e92f52b"} Dec 01 15:17:13 crc kubenswrapper[4637]: I1201 15:17:13.257452 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-prh4q" podStartSLOduration=1.5830217960000001 podStartE2EDuration="2.257434905s" podCreationTimestamp="2025-12-01 15:17:11 +0000 UTC" firstStartedPulling="2025-12-01 15:17:12.197451923 +0000 UTC m=+1882.715160751" lastFinishedPulling="2025-12-01 15:17:12.871865022 +0000 UTC m=+1883.389573860" observedRunningTime="2025-12-01 15:17:13.248417292 +0000 UTC m=+1883.766126110" watchObservedRunningTime="2025-12-01 15:17:13.257434905 +0000 UTC m=+1883.775143733" Dec 01 15:17:17 crc kubenswrapper[4637]: I1201 15:17:17.039089 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-ccqz9"] Dec 01 15:17:17 crc kubenswrapper[4637]: 
I1201 15:17:17.048542 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-ccqz9"] Dec 01 15:17:17 crc kubenswrapper[4637]: I1201 15:17:17.782851 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba29815f-9b56-490a-ad00-b358c7328ec9" path="/var/lib/kubelet/pods/ba29815f-9b56-490a-ad00-b358c7328ec9/volumes" Dec 01 15:17:33 crc kubenswrapper[4637]: I1201 15:17:33.926979 4637 scope.go:117] "RemoveContainer" containerID="96fdcf9ed7c60cc3388ac2f415865a265fcd89ba46e4afc7a438d5c178f3835d" Dec 01 15:18:14 crc kubenswrapper[4637]: I1201 15:18:14.791815 4637 generic.go:334] "Generic (PLEG): container finished" podID="d76c43c1-7a6f-41c6-b052-5363182c236c" containerID="6a533b5715abf99d2940e8a2c191e57108f14801192ff2058c0547252e92f52b" exitCode=0 Dec 01 15:18:14 crc kubenswrapper[4637]: I1201 15:18:14.792370 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-prh4q" event={"ID":"d76c43c1-7a6f-41c6-b052-5363182c236c","Type":"ContainerDied","Data":"6a533b5715abf99d2940e8a2c191e57108f14801192ff2058c0547252e92f52b"} Dec 01 15:18:16 crc kubenswrapper[4637]: I1201 15:18:16.176530 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-prh4q"
Dec 01 15:18:16 crc kubenswrapper[4637]: I1201 15:18:16.185309 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d76c43c1-7a6f-41c6-b052-5363182c236c-inventory\") pod \"d76c43c1-7a6f-41c6-b052-5363182c236c\" (UID: \"d76c43c1-7a6f-41c6-b052-5363182c236c\") "
Dec 01 15:18:16 crc kubenswrapper[4637]: I1201 15:18:16.185412 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d76c43c1-7a6f-41c6-b052-5363182c236c-ssh-key\") pod \"d76c43c1-7a6f-41c6-b052-5363182c236c\" (UID: \"d76c43c1-7a6f-41c6-b052-5363182c236c\") "
Dec 01 15:18:16 crc kubenswrapper[4637]: I1201 15:18:16.185474 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxzfm\" (UniqueName: \"kubernetes.io/projected/d76c43c1-7a6f-41c6-b052-5363182c236c-kube-api-access-jxzfm\") pod \"d76c43c1-7a6f-41c6-b052-5363182c236c\" (UID: \"d76c43c1-7a6f-41c6-b052-5363182c236c\") "
Dec 01 15:18:16 crc kubenswrapper[4637]: I1201 15:18:16.192882 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d76c43c1-7a6f-41c6-b052-5363182c236c-kube-api-access-jxzfm" (OuterVolumeSpecName: "kube-api-access-jxzfm") pod "d76c43c1-7a6f-41c6-b052-5363182c236c" (UID: "d76c43c1-7a6f-41c6-b052-5363182c236c"). InnerVolumeSpecName "kube-api-access-jxzfm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 15:18:16 crc kubenswrapper[4637]: I1201 15:18:16.228119 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d76c43c1-7a6f-41c6-b052-5363182c236c-inventory" (OuterVolumeSpecName: "inventory") pod "d76c43c1-7a6f-41c6-b052-5363182c236c" (UID: "d76c43c1-7a6f-41c6-b052-5363182c236c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 15:18:16 crc kubenswrapper[4637]: I1201 15:18:16.230408 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d76c43c1-7a6f-41c6-b052-5363182c236c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d76c43c1-7a6f-41c6-b052-5363182c236c" (UID: "d76c43c1-7a6f-41c6-b052-5363182c236c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 15:18:16 crc kubenswrapper[4637]: I1201 15:18:16.287232 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxzfm\" (UniqueName: \"kubernetes.io/projected/d76c43c1-7a6f-41c6-b052-5363182c236c-kube-api-access-jxzfm\") on node \"crc\" DevicePath \"\""
Dec 01 15:18:16 crc kubenswrapper[4637]: I1201 15:18:16.287415 4637 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d76c43c1-7a6f-41c6-b052-5363182c236c-inventory\") on node \"crc\" DevicePath \"\""
Dec 01 15:18:16 crc kubenswrapper[4637]: I1201 15:18:16.287476 4637 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d76c43c1-7a6f-41c6-b052-5363182c236c-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 01 15:18:16 crc kubenswrapper[4637]: I1201 15:18:16.817357 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-prh4q" event={"ID":"d76c43c1-7a6f-41c6-b052-5363182c236c","Type":"ContainerDied","Data":"584377d4fd1f86e14c7faf5386ec1aee4e37b7a1869dbd419836cf449259aeaa"}
Dec 01 15:18:16 crc kubenswrapper[4637]: I1201 15:18:16.817394 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="584377d4fd1f86e14c7faf5386ec1aee4e37b7a1869dbd419836cf449259aeaa"
Dec 01 15:18:16 crc kubenswrapper[4637]: I1201 15:18:16.817442 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-prh4q"
Dec 01 15:18:16 crc kubenswrapper[4637]: I1201 15:18:16.947358 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cmb78"]
Dec 01 15:18:16 crc kubenswrapper[4637]: E1201 15:18:16.948087 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d76c43c1-7a6f-41c6-b052-5363182c236c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Dec 01 15:18:16 crc kubenswrapper[4637]: I1201 15:18:16.948111 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="d76c43c1-7a6f-41c6-b052-5363182c236c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Dec 01 15:18:16 crc kubenswrapper[4637]: I1201 15:18:16.948306 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="d76c43c1-7a6f-41c6-b052-5363182c236c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Dec 01 15:18:16 crc kubenswrapper[4637]: I1201 15:18:16.949011 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cmb78"
Dec 01 15:18:16 crc kubenswrapper[4637]: I1201 15:18:16.951539 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 01 15:18:16 crc kubenswrapper[4637]: I1201 15:18:16.951767 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lt5wx"
Dec 01 15:18:16 crc kubenswrapper[4637]: I1201 15:18:16.952324 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 01 15:18:16 crc kubenswrapper[4637]: I1201 15:18:16.960746 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 01 15:18:16 crc kubenswrapper[4637]: I1201 15:18:16.970518 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cmb78"]
Dec 01 15:18:16 crc kubenswrapper[4637]: I1201 15:18:16.997819 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txldx\" (UniqueName: \"kubernetes.io/projected/ff7c15eb-bee2-412f-8689-ba46478d7b33-kube-api-access-txldx\") pod \"ssh-known-hosts-edpm-deployment-cmb78\" (UID: \"ff7c15eb-bee2-412f-8689-ba46478d7b33\") " pod="openstack/ssh-known-hosts-edpm-deployment-cmb78"
Dec 01 15:18:16 crc kubenswrapper[4637]: I1201 15:18:16.997871 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff7c15eb-bee2-412f-8689-ba46478d7b33-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cmb78\" (UID: \"ff7c15eb-bee2-412f-8689-ba46478d7b33\") " pod="openstack/ssh-known-hosts-edpm-deployment-cmb78"
Dec 01 15:18:16 crc kubenswrapper[4637]: I1201 15:18:16.997901 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ff7c15eb-bee2-412f-8689-ba46478d7b33-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cmb78\" (UID: \"ff7c15eb-bee2-412f-8689-ba46478d7b33\") " pod="openstack/ssh-known-hosts-edpm-deployment-cmb78"
Dec 01 15:18:17 crc kubenswrapper[4637]: I1201 15:18:17.099867 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txldx\" (UniqueName: \"kubernetes.io/projected/ff7c15eb-bee2-412f-8689-ba46478d7b33-kube-api-access-txldx\") pod \"ssh-known-hosts-edpm-deployment-cmb78\" (UID: \"ff7c15eb-bee2-412f-8689-ba46478d7b33\") " pod="openstack/ssh-known-hosts-edpm-deployment-cmb78"
Dec 01 15:18:17 crc kubenswrapper[4637]: I1201 15:18:17.099954 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff7c15eb-bee2-412f-8689-ba46478d7b33-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cmb78\" (UID: \"ff7c15eb-bee2-412f-8689-ba46478d7b33\") " pod="openstack/ssh-known-hosts-edpm-deployment-cmb78"
Dec 01 15:18:17 crc kubenswrapper[4637]: I1201 15:18:17.099988 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ff7c15eb-bee2-412f-8689-ba46478d7b33-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cmb78\" (UID: \"ff7c15eb-bee2-412f-8689-ba46478d7b33\") " pod="openstack/ssh-known-hosts-edpm-deployment-cmb78"
Dec 01 15:18:17 crc kubenswrapper[4637]: I1201 15:18:17.104712 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ff7c15eb-bee2-412f-8689-ba46478d7b33-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cmb78\" (UID: \"ff7c15eb-bee2-412f-8689-ba46478d7b33\") " pod="openstack/ssh-known-hosts-edpm-deployment-cmb78"
Dec 01 15:18:17 crc kubenswrapper[4637]: I1201 15:18:17.114744 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff7c15eb-bee2-412f-8689-ba46478d7b33-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cmb78\" (UID: \"ff7c15eb-bee2-412f-8689-ba46478d7b33\") " pod="openstack/ssh-known-hosts-edpm-deployment-cmb78"
Dec 01 15:18:17 crc kubenswrapper[4637]: I1201 15:18:17.121695 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txldx\" (UniqueName: \"kubernetes.io/projected/ff7c15eb-bee2-412f-8689-ba46478d7b33-kube-api-access-txldx\") pod \"ssh-known-hosts-edpm-deployment-cmb78\" (UID: \"ff7c15eb-bee2-412f-8689-ba46478d7b33\") " pod="openstack/ssh-known-hosts-edpm-deployment-cmb78"
Dec 01 15:18:17 crc kubenswrapper[4637]: I1201 15:18:17.265234 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cmb78"
Dec 01 15:18:17 crc kubenswrapper[4637]: I1201 15:18:17.868243 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cmb78"]
Dec 01 15:18:18 crc kubenswrapper[4637]: I1201 15:18:18.834876 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cmb78" event={"ID":"ff7c15eb-bee2-412f-8689-ba46478d7b33","Type":"ContainerStarted","Data":"c96fb6a3eca854aceb71b3f7f9ff52edfdc9680a80a2790799546f86c698305e"}
Dec 01 15:18:18 crc kubenswrapper[4637]: I1201 15:18:18.835280 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cmb78" event={"ID":"ff7c15eb-bee2-412f-8689-ba46478d7b33","Type":"ContainerStarted","Data":"162ff2c251f2ff67a3e5f9b7d6d8ef806ab157a799653b31930a9109cebbba9a"}
Dec 01 15:18:18 crc kubenswrapper[4637]: I1201 15:18:18.856156 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-cmb78" podStartSLOduration=2.383097457 podStartE2EDuration="2.856131277s" podCreationTimestamp="2025-12-01 15:18:16 +0000 UTC" firstStartedPulling="2025-12-01 15:18:17.88437507 +0000 UTC m=+1948.402083898" lastFinishedPulling="2025-12-01 15:18:18.35740889 +0000 UTC m=+1948.875117718" observedRunningTime="2025-12-01 15:18:18.852469391 +0000 UTC m=+1949.370178229" watchObservedRunningTime="2025-12-01 15:18:18.856131277 +0000 UTC m=+1949.373840105"
Dec 01 15:18:26 crc kubenswrapper[4637]: I1201 15:18:26.918415 4637 generic.go:334] "Generic (PLEG): container finished" podID="ff7c15eb-bee2-412f-8689-ba46478d7b33" containerID="c96fb6a3eca854aceb71b3f7f9ff52edfdc9680a80a2790799546f86c698305e" exitCode=0
Dec 01 15:18:26 crc kubenswrapper[4637]: I1201 15:18:26.918482 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cmb78" event={"ID":"ff7c15eb-bee2-412f-8689-ba46478d7b33","Type":"ContainerDied","Data":"c96fb6a3eca854aceb71b3f7f9ff52edfdc9680a80a2790799546f86c698305e"}
Dec 01 15:18:28 crc kubenswrapper[4637]: I1201 15:18:28.330088 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cmb78"
Dec 01 15:18:28 crc kubenswrapper[4637]: I1201 15:18:28.434363 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff7c15eb-bee2-412f-8689-ba46478d7b33-ssh-key-openstack-edpm-ipam\") pod \"ff7c15eb-bee2-412f-8689-ba46478d7b33\" (UID: \"ff7c15eb-bee2-412f-8689-ba46478d7b33\") "
Dec 01 15:18:28 crc kubenswrapper[4637]: I1201 15:18:28.434459 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ff7c15eb-bee2-412f-8689-ba46478d7b33-inventory-0\") pod \"ff7c15eb-bee2-412f-8689-ba46478d7b33\" (UID: \"ff7c15eb-bee2-412f-8689-ba46478d7b33\") "
Dec 01 15:18:28 crc kubenswrapper[4637]: I1201 15:18:28.434609 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txldx\" (UniqueName: \"kubernetes.io/projected/ff7c15eb-bee2-412f-8689-ba46478d7b33-kube-api-access-txldx\") pod \"ff7c15eb-bee2-412f-8689-ba46478d7b33\" (UID: \"ff7c15eb-bee2-412f-8689-ba46478d7b33\") "
Dec 01 15:18:28 crc kubenswrapper[4637]: I1201 15:18:28.442121 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff7c15eb-bee2-412f-8689-ba46478d7b33-kube-api-access-txldx" (OuterVolumeSpecName: "kube-api-access-txldx") pod "ff7c15eb-bee2-412f-8689-ba46478d7b33" (UID: "ff7c15eb-bee2-412f-8689-ba46478d7b33"). InnerVolumeSpecName "kube-api-access-txldx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 15:18:28 crc kubenswrapper[4637]: I1201 15:18:28.460406 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff7c15eb-bee2-412f-8689-ba46478d7b33-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ff7c15eb-bee2-412f-8689-ba46478d7b33" (UID: "ff7c15eb-bee2-412f-8689-ba46478d7b33"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 15:18:28 crc kubenswrapper[4637]: I1201 15:18:28.468033 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff7c15eb-bee2-412f-8689-ba46478d7b33-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "ff7c15eb-bee2-412f-8689-ba46478d7b33" (UID: "ff7c15eb-bee2-412f-8689-ba46478d7b33"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 15:18:28 crc kubenswrapper[4637]: I1201 15:18:28.537410 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txldx\" (UniqueName: \"kubernetes.io/projected/ff7c15eb-bee2-412f-8689-ba46478d7b33-kube-api-access-txldx\") on node \"crc\" DevicePath \"\""
Dec 01 15:18:28 crc kubenswrapper[4637]: I1201 15:18:28.537970 4637 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff7c15eb-bee2-412f-8689-ba46478d7b33-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Dec 01 15:18:28 crc kubenswrapper[4637]: I1201 15:18:28.538073 4637 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ff7c15eb-bee2-412f-8689-ba46478d7b33-inventory-0\") on node \"crc\" DevicePath \"\""
Dec 01 15:18:28 crc kubenswrapper[4637]: I1201 15:18:28.934834 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cmb78" event={"ID":"ff7c15eb-bee2-412f-8689-ba46478d7b33","Type":"ContainerDied","Data":"162ff2c251f2ff67a3e5f9b7d6d8ef806ab157a799653b31930a9109cebbba9a"}
Dec 01 15:18:28 crc kubenswrapper[4637]: I1201 15:18:28.935069 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="162ff2c251f2ff67a3e5f9b7d6d8ef806ab157a799653b31930a9109cebbba9a"
Dec 01 15:18:28 crc kubenswrapper[4637]: I1201 15:18:28.934952 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cmb78"
Dec 01 15:18:29 crc kubenswrapper[4637]: I1201 15:18:29.020816 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5ctt6"]
Dec 01 15:18:29 crc kubenswrapper[4637]: E1201 15:18:29.021288 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff7c15eb-bee2-412f-8689-ba46478d7b33" containerName="ssh-known-hosts-edpm-deployment"
Dec 01 15:18:29 crc kubenswrapper[4637]: I1201 15:18:29.021307 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7c15eb-bee2-412f-8689-ba46478d7b33" containerName="ssh-known-hosts-edpm-deployment"
Dec 01 15:18:29 crc kubenswrapper[4637]: I1201 15:18:29.021533 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff7c15eb-bee2-412f-8689-ba46478d7b33" containerName="ssh-known-hosts-edpm-deployment"
Dec 01 15:18:29 crc kubenswrapper[4637]: I1201 15:18:29.022813 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5ctt6"
Dec 01 15:18:29 crc kubenswrapper[4637]: I1201 15:18:29.025669 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 01 15:18:29 crc kubenswrapper[4637]: I1201 15:18:29.025840 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 01 15:18:29 crc kubenswrapper[4637]: I1201 15:18:29.025984 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lt5wx"
Dec 01 15:18:29 crc kubenswrapper[4637]: I1201 15:18:29.026116 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 01 15:18:29 crc kubenswrapper[4637]: I1201 15:18:29.042718 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5ctt6"]
Dec 01 15:18:29 crc kubenswrapper[4637]: I1201 15:18:29.147575 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkh7h\" (UniqueName: \"kubernetes.io/projected/b2f971f6-6729-4d92-9849-2c03e6d0747b-kube-api-access-bkh7h\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5ctt6\" (UID: \"b2f971f6-6729-4d92-9849-2c03e6d0747b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5ctt6"
Dec 01 15:18:29 crc kubenswrapper[4637]: I1201 15:18:29.147636 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b2f971f6-6729-4d92-9849-2c03e6d0747b-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5ctt6\" (UID: \"b2f971f6-6729-4d92-9849-2c03e6d0747b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5ctt6"
Dec 01 15:18:29 crc kubenswrapper[4637]: I1201 15:18:29.147726 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2f971f6-6729-4d92-9849-2c03e6d0747b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5ctt6\" (UID: \"b2f971f6-6729-4d92-9849-2c03e6d0747b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5ctt6"
Dec 01 15:18:29 crc kubenswrapper[4637]: I1201 15:18:29.249409 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2f971f6-6729-4d92-9849-2c03e6d0747b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5ctt6\" (UID: \"b2f971f6-6729-4d92-9849-2c03e6d0747b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5ctt6"
Dec 01 15:18:29 crc kubenswrapper[4637]: I1201 15:18:29.249558 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkh7h\" (UniqueName: \"kubernetes.io/projected/b2f971f6-6729-4d92-9849-2c03e6d0747b-kube-api-access-bkh7h\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5ctt6\" (UID: \"b2f971f6-6729-4d92-9849-2c03e6d0747b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5ctt6"
Dec 01 15:18:29 crc kubenswrapper[4637]: I1201 15:18:29.249584 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b2f971f6-6729-4d92-9849-2c03e6d0747b-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5ctt6\" (UID: \"b2f971f6-6729-4d92-9849-2c03e6d0747b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5ctt6"
Dec 01 15:18:29 crc kubenswrapper[4637]: I1201 15:18:29.255372 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2f971f6-6729-4d92-9849-2c03e6d0747b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5ctt6\" (UID: \"b2f971f6-6729-4d92-9849-2c03e6d0747b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5ctt6"
Dec 01 15:18:29 crc kubenswrapper[4637]: I1201 15:18:29.259992 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b2f971f6-6729-4d92-9849-2c03e6d0747b-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5ctt6\" (UID: \"b2f971f6-6729-4d92-9849-2c03e6d0747b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5ctt6"
Dec 01 15:18:29 crc kubenswrapper[4637]: I1201 15:18:29.270373 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkh7h\" (UniqueName: \"kubernetes.io/projected/b2f971f6-6729-4d92-9849-2c03e6d0747b-kube-api-access-bkh7h\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5ctt6\" (UID: \"b2f971f6-6729-4d92-9849-2c03e6d0747b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5ctt6"
Dec 01 15:18:29 crc kubenswrapper[4637]: I1201 15:18:29.342512 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5ctt6"
Dec 01 15:18:30 crc kubenswrapper[4637]: I1201 15:18:30.165740 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5ctt6"]
Dec 01 15:18:30 crc kubenswrapper[4637]: I1201 15:18:30.949477 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5ctt6" event={"ID":"b2f971f6-6729-4d92-9849-2c03e6d0747b","Type":"ContainerStarted","Data":"1e50ab4c3452ef281bc0f1bc5c2184b29f29ba8ce3543e39fae9823555eb7a29"}
Dec 01 15:18:31 crc kubenswrapper[4637]: I1201 15:18:31.959245 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5ctt6" event={"ID":"b2f971f6-6729-4d92-9849-2c03e6d0747b","Type":"ContainerStarted","Data":"84af6e9d737a0e930d5d0a7a2ffd43c06c1f584768da5f5c3096922b1aa07763"}
Dec 01 15:18:31 crc kubenswrapper[4637]: I1201 15:18:31.978200 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5ctt6" podStartSLOduration=3.433019737 podStartE2EDuration="3.978166787s" podCreationTimestamp="2025-12-01 15:18:28 +0000 UTC" firstStartedPulling="2025-12-01 15:18:30.169072064 +0000 UTC m=+1960.686780892" lastFinishedPulling="2025-12-01 15:18:30.714219114 +0000 UTC m=+1961.231927942" observedRunningTime="2025-12-01 15:18:31.973130787 +0000 UTC m=+1962.490839615" watchObservedRunningTime="2025-12-01 15:18:31.978166787 +0000 UTC m=+1962.495875615"
Dec 01 15:18:34 crc kubenswrapper[4637]: I1201 15:18:34.017013 4637 scope.go:117] "RemoveContainer" containerID="a0b177fb2a4752086bcfbabade4accf3dd6afbe4e55f8c23c33acaf8038df488"
Dec 01 15:18:34 crc kubenswrapper[4637]: I1201 15:18:34.037125 4637 scope.go:117] "RemoveContainer" containerID="cf3c5f74a679cf7926ba56e97fc17432eb98877ef3ffdb3f8fe75b00d5b85ad0"
Dec 01 15:18:40 crc kubenswrapper[4637]: I1201 15:18:40.025311 4637 generic.go:334] "Generic (PLEG): container finished" podID="b2f971f6-6729-4d92-9849-2c03e6d0747b" containerID="84af6e9d737a0e930d5d0a7a2ffd43c06c1f584768da5f5c3096922b1aa07763" exitCode=0
Dec 01 15:18:40 crc kubenswrapper[4637]: I1201 15:18:40.025399 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5ctt6" event={"ID":"b2f971f6-6729-4d92-9849-2c03e6d0747b","Type":"ContainerDied","Data":"84af6e9d737a0e930d5d0a7a2ffd43c06c1f584768da5f5c3096922b1aa07763"}
Dec 01 15:18:41 crc kubenswrapper[4637]: I1201 15:18:41.424697 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5ctt6"
Dec 01 15:18:41 crc kubenswrapper[4637]: I1201 15:18:41.609876 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2f971f6-6729-4d92-9849-2c03e6d0747b-inventory\") pod \"b2f971f6-6729-4d92-9849-2c03e6d0747b\" (UID: \"b2f971f6-6729-4d92-9849-2c03e6d0747b\") "
Dec 01 15:18:41 crc kubenswrapper[4637]: I1201 15:18:41.610328 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkh7h\" (UniqueName: \"kubernetes.io/projected/b2f971f6-6729-4d92-9849-2c03e6d0747b-kube-api-access-bkh7h\") pod \"b2f971f6-6729-4d92-9849-2c03e6d0747b\" (UID: \"b2f971f6-6729-4d92-9849-2c03e6d0747b\") "
Dec 01 15:18:41 crc kubenswrapper[4637]: I1201 15:18:41.610493 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b2f971f6-6729-4d92-9849-2c03e6d0747b-ssh-key\") pod \"b2f971f6-6729-4d92-9849-2c03e6d0747b\" (UID: \"b2f971f6-6729-4d92-9849-2c03e6d0747b\") "
Dec 01 15:18:41 crc kubenswrapper[4637]: I1201 15:18:41.618342 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2f971f6-6729-4d92-9849-2c03e6d0747b-kube-api-access-bkh7h" (OuterVolumeSpecName: "kube-api-access-bkh7h") pod "b2f971f6-6729-4d92-9849-2c03e6d0747b" (UID: "b2f971f6-6729-4d92-9849-2c03e6d0747b"). InnerVolumeSpecName "kube-api-access-bkh7h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 15:18:41 crc kubenswrapper[4637]: I1201 15:18:41.640957 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2f971f6-6729-4d92-9849-2c03e6d0747b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b2f971f6-6729-4d92-9849-2c03e6d0747b" (UID: "b2f971f6-6729-4d92-9849-2c03e6d0747b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 15:18:41 crc kubenswrapper[4637]: I1201 15:18:41.643649 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2f971f6-6729-4d92-9849-2c03e6d0747b-inventory" (OuterVolumeSpecName: "inventory") pod "b2f971f6-6729-4d92-9849-2c03e6d0747b" (UID: "b2f971f6-6729-4d92-9849-2c03e6d0747b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 15:18:41 crc kubenswrapper[4637]: I1201 15:18:41.712454 4637 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b2f971f6-6729-4d92-9849-2c03e6d0747b-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 01 15:18:41 crc kubenswrapper[4637]: I1201 15:18:41.712492 4637 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2f971f6-6729-4d92-9849-2c03e6d0747b-inventory\") on node \"crc\" DevicePath \"\""
Dec 01 15:18:41 crc kubenswrapper[4637]: I1201 15:18:41.712502 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkh7h\" (UniqueName: \"kubernetes.io/projected/b2f971f6-6729-4d92-9849-2c03e6d0747b-kube-api-access-bkh7h\") on node \"crc\" DevicePath \"\""
Dec 01 15:18:42 crc kubenswrapper[4637]: I1201 15:18:42.041830 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5ctt6" event={"ID":"b2f971f6-6729-4d92-9849-2c03e6d0747b","Type":"ContainerDied","Data":"1e50ab4c3452ef281bc0f1bc5c2184b29f29ba8ce3543e39fae9823555eb7a29"}
Dec 01 15:18:42 crc kubenswrapper[4637]: I1201 15:18:42.042150 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e50ab4c3452ef281bc0f1bc5c2184b29f29ba8ce3543e39fae9823555eb7a29"
Dec 01 15:18:42 crc kubenswrapper[4637]: I1201 15:18:42.042045 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5ctt6"
Dec 01 15:18:42 crc kubenswrapper[4637]: I1201 15:18:42.240966 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq"]
Dec 01 15:18:42 crc kubenswrapper[4637]: E1201 15:18:42.252186 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2f971f6-6729-4d92-9849-2c03e6d0747b" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Dec 01 15:18:42 crc kubenswrapper[4637]: I1201 15:18:42.252256 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2f971f6-6729-4d92-9849-2c03e6d0747b" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Dec 01 15:18:42 crc kubenswrapper[4637]: I1201 15:18:42.252884 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2f971f6-6729-4d92-9849-2c03e6d0747b" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Dec 01 15:18:42 crc kubenswrapper[4637]: I1201 15:18:42.253920 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq"
Dec 01 15:18:42 crc kubenswrapper[4637]: I1201 15:18:42.260698 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 01 15:18:42 crc kubenswrapper[4637]: I1201 15:18:42.261056 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 01 15:18:42 crc kubenswrapper[4637]: I1201 15:18:42.261869 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lt5wx"
Dec 01 15:18:42 crc kubenswrapper[4637]: I1201 15:18:42.262122 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 01 15:18:42 crc kubenswrapper[4637]: I1201 15:18:42.269949 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq"]
Dec 01 15:18:42 crc kubenswrapper[4637]: I1201 15:18:42.427759 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3b484f7-438b-4ea9-9529-9ba5a49fca84-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq\" (UID: \"d3b484f7-438b-4ea9-9529-9ba5a49fca84\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq"
Dec 01 15:18:42 crc kubenswrapper[4637]: I1201 15:18:42.427820 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d3b484f7-438b-4ea9-9529-9ba5a49fca84-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq\" (UID: \"d3b484f7-438b-4ea9-9529-9ba5a49fca84\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq"
Dec 01 15:18:42 crc kubenswrapper[4637]: I1201 15:18:42.428584 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnmv7\" (UniqueName: \"kubernetes.io/projected/d3b484f7-438b-4ea9-9529-9ba5a49fca84-kube-api-access-nnmv7\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq\" (UID: \"d3b484f7-438b-4ea9-9529-9ba5a49fca84\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq"
Dec 01 15:18:42 crc kubenswrapper[4637]: I1201 15:18:42.530891 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnmv7\" (UniqueName: \"kubernetes.io/projected/d3b484f7-438b-4ea9-9529-9ba5a49fca84-kube-api-access-nnmv7\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq\" (UID: \"d3b484f7-438b-4ea9-9529-9ba5a49fca84\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq"
Dec 01 15:18:42 crc kubenswrapper[4637]: I1201 15:18:42.531006 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3b484f7-438b-4ea9-9529-9ba5a49fca84-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq\" (UID: \"d3b484f7-438b-4ea9-9529-9ba5a49fca84\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq"
Dec 01 15:18:42 crc kubenswrapper[4637]: I1201 15:18:42.531043 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d3b484f7-438b-4ea9-9529-9ba5a49fca84-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq\" (UID: \"d3b484f7-438b-4ea9-9529-9ba5a49fca84\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq"
Dec 01 15:18:42 crc kubenswrapper[4637]: I1201 15:18:42.536476 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3b484f7-438b-4ea9-9529-9ba5a49fca84-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq\" (UID: \"d3b484f7-438b-4ea9-9529-9ba5a49fca84\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq"
Dec 01 15:18:42 crc kubenswrapper[4637]: I1201 15:18:42.536861 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d3b484f7-438b-4ea9-9529-9ba5a49fca84-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq\" (UID: \"d3b484f7-438b-4ea9-9529-9ba5a49fca84\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq"
Dec 01 15:18:42 crc kubenswrapper[4637]: I1201 15:18:42.550694 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnmv7\" (UniqueName: \"kubernetes.io/projected/d3b484f7-438b-4ea9-9529-9ba5a49fca84-kube-api-access-nnmv7\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq\" (UID: \"d3b484f7-438b-4ea9-9529-9ba5a49fca84\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq"
Dec 01 15:18:42 crc kubenswrapper[4637]: I1201 15:18:42.585066 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq"
Dec 01 15:18:43 crc kubenswrapper[4637]: I1201 15:18:43.191979 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq"]
Dec 01 15:18:44 crc kubenswrapper[4637]: I1201 15:18:44.056689 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq" event={"ID":"d3b484f7-438b-4ea9-9529-9ba5a49fca84","Type":"ContainerStarted","Data":"b95661bd5e0da49bc9e16d1b2e676f18b1a84d296e6cc22042f08fa658cdd56c"}
Dec 01 15:18:45 crc kubenswrapper[4637]: I1201 15:18:45.065723 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq" event={"ID":"d3b484f7-438b-4ea9-9529-9ba5a49fca84","Type":"ContainerStarted","Data":"a3536c28c353bcacb2cfa7c815358cccf7c4fb004430c9d1604749a546a30f35"}
Dec 01 15:18:45 crc kubenswrapper[4637]: I1201 15:18:45.082724 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq" podStartSLOduration=2.006214687 podStartE2EDuration="3.082705377s" podCreationTimestamp="2025-12-01 15:18:42 +0000 UTC" firstStartedPulling="2025-12-01 15:18:43.175230315 +0000 UTC m=+1973.692939143" lastFinishedPulling="2025-12-01 15:18:44.251721005 +0000 UTC m=+1974.769429833" observedRunningTime="2025-12-01 15:18:45.077521452 +0000 UTC m=+1975.595230280" watchObservedRunningTime="2025-12-01 15:18:45.082705377 +0000 UTC m=+1975.600414205"
Dec 01 15:18:55 crc kubenswrapper[4637]: I1201 15:18:55.153864 4637 generic.go:334] "Generic (PLEG): container finished" podID="d3b484f7-438b-4ea9-9529-9ba5a49fca84" containerID="a3536c28c353bcacb2cfa7c815358cccf7c4fb004430c9d1604749a546a30f35" exitCode=0
Dec 01 15:18:55 crc kubenswrapper[4637]: I1201 15:18:55.153969 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq" event={"ID":"d3b484f7-438b-4ea9-9529-9ba5a49fca84","Type":"ContainerDied","Data":"a3536c28c353bcacb2cfa7c815358cccf7c4fb004430c9d1604749a546a30f35"}
Dec 01 15:18:56 crc kubenswrapper[4637]: I1201 15:18:56.590852 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq"
Dec 01 15:18:56 crc kubenswrapper[4637]: I1201 15:18:56.728832 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3b484f7-438b-4ea9-9529-9ba5a49fca84-inventory\") pod \"d3b484f7-438b-4ea9-9529-9ba5a49fca84\" (UID: \"d3b484f7-438b-4ea9-9529-9ba5a49fca84\") "
Dec 01 15:18:56 crc kubenswrapper[4637]: I1201 15:18:56.729120 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnmv7\" (UniqueName: \"kubernetes.io/projected/d3b484f7-438b-4ea9-9529-9ba5a49fca84-kube-api-access-nnmv7\") pod \"d3b484f7-438b-4ea9-9529-9ba5a49fca84\" (UID: \"d3b484f7-438b-4ea9-9529-9ba5a49fca84\") "
Dec 01 15:18:56 crc kubenswrapper[4637]: I1201 15:18:56.729153 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d3b484f7-438b-4ea9-9529-9ba5a49fca84-ssh-key\") pod \"d3b484f7-438b-4ea9-9529-9ba5a49fca84\" (UID: \"d3b484f7-438b-4ea9-9529-9ba5a49fca84\") "
Dec 01 15:18:56 crc kubenswrapper[4637]: I1201 15:18:56.743537 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3b484f7-438b-4ea9-9529-9ba5a49fca84-kube-api-access-nnmv7" (OuterVolumeSpecName: "kube-api-access-nnmv7") pod "d3b484f7-438b-4ea9-9529-9ba5a49fca84" (UID: "d3b484f7-438b-4ea9-9529-9ba5a49fca84"). InnerVolumeSpecName "kube-api-access-nnmv7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 15:18:56 crc kubenswrapper[4637]: I1201 15:18:56.839944 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnmv7\" (UniqueName: \"kubernetes.io/projected/d3b484f7-438b-4ea9-9529-9ba5a49fca84-kube-api-access-nnmv7\") on node \"crc\" DevicePath \"\""
Dec 01 15:18:56 crc kubenswrapper[4637]: I1201 15:18:56.841161 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3b484f7-438b-4ea9-9529-9ba5a49fca84-inventory" (OuterVolumeSpecName: "inventory") pod "d3b484f7-438b-4ea9-9529-9ba5a49fca84" (UID: "d3b484f7-438b-4ea9-9529-9ba5a49fca84"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 15:18:56 crc kubenswrapper[4637]: I1201 15:18:56.900800 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3b484f7-438b-4ea9-9529-9ba5a49fca84-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d3b484f7-438b-4ea9-9529-9ba5a49fca84" (UID: "d3b484f7-438b-4ea9-9529-9ba5a49fca84"). InnerVolumeSpecName "ssh-key".
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:18:56 crc kubenswrapper[4637]: I1201 15:18:56.941364 4637 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3b484f7-438b-4ea9-9529-9ba5a49fca84-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 15:18:56 crc kubenswrapper[4637]: I1201 15:18:56.941400 4637 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d3b484f7-438b-4ea9-9529-9ba5a49fca84-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.177679 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq" event={"ID":"d3b484f7-438b-4ea9-9529-9ba5a49fca84","Type":"ContainerDied","Data":"b95661bd5e0da49bc9e16d1b2e676f18b1a84d296e6cc22042f08fa658cdd56c"} Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.177941 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b95661bd5e0da49bc9e16d1b2e676f18b1a84d296e6cc22042f08fa658cdd56c" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.177753 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.269786 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v"] Dec 01 15:18:57 crc kubenswrapper[4637]: E1201 15:18:57.270172 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3b484f7-438b-4ea9-9529-9ba5a49fca84" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.270193 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3b484f7-438b-4ea9-9529-9ba5a49fca84" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.270384 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3b484f7-438b-4ea9-9529-9ba5a49fca84" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.270987 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: W1201 15:18:57.278540 4637 reflector.go:561] object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0": failed to list *v1.Secret: secrets "openstack-edpm-ipam-ovn-default-certs-0" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Dec 01 15:18:57 crc kubenswrapper[4637]: W1201 15:18:57.278573 4637 reflector.go:561] object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0": failed to list *v1.Secret: secrets "openstack-edpm-ipam-libvirt-default-certs-0" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Dec 01 15:18:57 crc kubenswrapper[4637]: E1201 15:18:57.278617 4637 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-edpm-ipam-libvirt-default-certs-0\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openstack-edpm-ipam-libvirt-default-certs-0\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 01 15:18:57 crc kubenswrapper[4637]: E1201 15:18:57.278619 4637 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-edpm-ipam-ovn-default-certs-0\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openstack-edpm-ipam-ovn-default-certs-0\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.278748 4637 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.278834 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 15:18:57 crc kubenswrapper[4637]: W1201 15:18:57.278858 4637 reflector.go:561] object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0": failed to list *v1.Secret: secrets "openstack-edpm-ipam-neutron-metadata-default-certs-0" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Dec 01 15:18:57 crc kubenswrapper[4637]: E1201 15:18:57.278878 4637 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-edpm-ipam-neutron-metadata-default-certs-0\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.278782 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lt5wx" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.283832 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.284412 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.307117 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v"] Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.349857 4637 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.349921 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.349980 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.350007 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7bt9\" (UniqueName: \"kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-kube-api-access-j7bt9\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.350028 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.350066 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.350172 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.350191 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.350210 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.350239 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.350276 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.350294 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.350311 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.350340 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.452302 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.452347 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.452381 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.452417 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.452493 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.452520 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.452536 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-neutron-metadata-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.452561 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.452596 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.452622 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.452652 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: 
\"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.452673 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7bt9\" (UniqueName: \"kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-kube-api-access-j7bt9\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.452702 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.452737 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.463765 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.463848 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.465473 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.465827 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.471521 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 
15:18:57.471890 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.473511 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.476113 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7bt9\" (UniqueName: \"kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-kube-api-access-j7bt9\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.477251 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.478845 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-bootstrap-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:57 crc kubenswrapper[4637]: I1201 15:18:57.479141 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:58 crc kubenswrapper[4637]: I1201 15:18:58.339205 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Dec 01 15:18:58 crc kubenswrapper[4637]: I1201 15:18:58.359277 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:58 crc kubenswrapper[4637]: E1201 15:18:58.470276 4637 projected.go:263] Couldn't get secret openstack/openstack-edpm-ipam-libvirt-default-certs-0: failed to sync secret cache: timed out waiting for the condition Dec 01 15:18:58 crc kubenswrapper[4637]: E1201 15:18:58.470651 4637 projected.go:194] Error preparing data for projected volume openstack-edpm-ipam-libvirt-default-certs-0 for pod openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v: failed to sync secret cache: timed out waiting for the condition Dec 01 15:18:58 crc kubenswrapper[4637]: E1201 15:18:58.470736 4637 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-libvirt-default-certs-0 podName:1e434aff-123b-42e2-8c40-c82c0bd5aabe nodeName:}" failed. No retries permitted until 2025-12-01 15:18:58.970709756 +0000 UTC m=+1989.488418584 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openstack-edpm-ipam-libvirt-default-certs-0" (UniqueName: "kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-libvirt-default-certs-0") pod "install-certs-edpm-deployment-openstack-edpm-ipam-s548v" (UID: "1e434aff-123b-42e2-8c40-c82c0bd5aabe") : failed to sync secret cache: timed out waiting for the condition Dec 01 15:18:58 crc kubenswrapper[4637]: E1201 15:18:58.471083 4637 projected.go:263] Couldn't get secret openstack/openstack-edpm-ipam-ovn-default-certs-0: failed to sync secret cache: timed out waiting for the condition Dec 01 15:18:58 crc kubenswrapper[4637]: E1201 15:18:58.471105 4637 projected.go:194] Error preparing data for projected volume openstack-edpm-ipam-ovn-default-certs-0 for pod openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v: failed to sync secret cache: timed out waiting for the condition Dec 01 15:18:58 crc kubenswrapper[4637]: E1201 15:18:58.471143 4637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-ovn-default-certs-0 podName:1e434aff-123b-42e2-8c40-c82c0bd5aabe nodeName:}" failed. No retries permitted until 2025-12-01 15:18:58.971132897 +0000 UTC m=+1989.488841725 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openstack-edpm-ipam-ovn-default-certs-0" (UniqueName: "kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-ovn-default-certs-0") pod "install-certs-edpm-deployment-openstack-edpm-ipam-s548v" (UID: "1e434aff-123b-42e2-8c40-c82c0bd5aabe") : failed to sync secret cache: timed out waiting for the condition Dec 01 15:18:58 crc kubenswrapper[4637]: I1201 15:18:58.513364 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Dec 01 15:18:58 crc kubenswrapper[4637]: I1201 15:18:58.535610 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Dec 01 15:18:58 crc kubenswrapper[4637]: I1201 15:18:58.983382 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:58 crc kubenswrapper[4637]: I1201 15:18:58.983521 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:58 crc kubenswrapper[4637]: I1201 15:18:58.988509 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:58 crc kubenswrapper[4637]: I1201 15:18:58.990564 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s548v\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:59 crc kubenswrapper[4637]: I1201 15:18:59.085670 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:18:59 crc kubenswrapper[4637]: I1201 15:18:59.713839 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v"] Dec 01 15:19:00 crc kubenswrapper[4637]: I1201 15:19:00.215249 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" event={"ID":"1e434aff-123b-42e2-8c40-c82c0bd5aabe","Type":"ContainerStarted","Data":"6f138e58ae6b6e70c72c89288e7953b0c00c47873f294c17e9c5bc80dfb09994"} Dec 01 15:19:01 crc kubenswrapper[4637]: I1201 15:19:01.230817 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" event={"ID":"1e434aff-123b-42e2-8c40-c82c0bd5aabe","Type":"ContainerStarted","Data":"2b156198409d0dc4544417699250eca2eabb3b49d64f2b81f1eeaca89e418eb9"} Dec 01 15:19:15 crc kubenswrapper[4637]: I1201 15:19:15.613298 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:19:15 crc kubenswrapper[4637]: I1201 15:19:15.613885 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:19:34 crc kubenswrapper[4637]: I1201 15:19:34.132558 4637 scope.go:117] "RemoveContainer" containerID="9810c839c734dc2891324d8d2a636a5dc04348fdd7fcc6aa436dcc11a87498b9" Dec 01 15:19:43 crc kubenswrapper[4637]: I1201 15:19:43.651817 4637 generic.go:334] "Generic (PLEG): container finished" podID="1e434aff-123b-42e2-8c40-c82c0bd5aabe" containerID="2b156198409d0dc4544417699250eca2eabb3b49d64f2b81f1eeaca89e418eb9" exitCode=0 Dec 01 15:19:43 crc kubenswrapper[4637]: I1201 15:19:43.651878 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" event={"ID":"1e434aff-123b-42e2-8c40-c82c0bd5aabe","Type":"ContainerDied","Data":"2b156198409d0dc4544417699250eca2eabb3b49d64f2b81f1eeaca89e418eb9"} Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.044867 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.217725 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-telemetry-combined-ca-bundle\") pod \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.217797 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-ovn-default-certs-0\") pod \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.217825 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-libvirt-combined-ca-bundle\") pod \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.217853 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-ssh-key\") pod \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.217904 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\" (UID: 
\"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.217921 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-inventory\") pod \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.218006 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.218111 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-nova-combined-ca-bundle\") pod \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.218172 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.218196 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-neutron-metadata-combined-ca-bundle\") pod \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " Dec 01 
15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.218214 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-bootstrap-combined-ca-bundle\") pod \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.218252 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7bt9\" (UniqueName: \"kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-kube-api-access-j7bt9\") pod \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.218278 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-repo-setup-combined-ca-bundle\") pod \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.218314 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-ovn-combined-ca-bundle\") pod \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\" (UID: \"1e434aff-123b-42e2-8c40-c82c0bd5aabe\") " Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.223848 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "1e434aff-123b-42e2-8c40-c82c0bd5aabe" (UID: "1e434aff-123b-42e2-8c40-c82c0bd5aabe"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.223882 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "1e434aff-123b-42e2-8c40-c82c0bd5aabe" (UID: "1e434aff-123b-42e2-8c40-c82c0bd5aabe"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.223964 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "1e434aff-123b-42e2-8c40-c82c0bd5aabe" (UID: "1e434aff-123b-42e2-8c40-c82c0bd5aabe"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.224412 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "1e434aff-123b-42e2-8c40-c82c0bd5aabe" (UID: "1e434aff-123b-42e2-8c40-c82c0bd5aabe"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.225738 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "1e434aff-123b-42e2-8c40-c82c0bd5aabe" (UID: "1e434aff-123b-42e2-8c40-c82c0bd5aabe"). 
InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.227188 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "1e434aff-123b-42e2-8c40-c82c0bd5aabe" (UID: "1e434aff-123b-42e2-8c40-c82c0bd5aabe"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.227581 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "1e434aff-123b-42e2-8c40-c82c0bd5aabe" (UID: "1e434aff-123b-42e2-8c40-c82c0bd5aabe"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.228547 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "1e434aff-123b-42e2-8c40-c82c0bd5aabe" (UID: "1e434aff-123b-42e2-8c40-c82c0bd5aabe"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.228600 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "1e434aff-123b-42e2-8c40-c82c0bd5aabe" (UID: "1e434aff-123b-42e2-8c40-c82c0bd5aabe"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.229157 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "1e434aff-123b-42e2-8c40-c82c0bd5aabe" (UID: "1e434aff-123b-42e2-8c40-c82c0bd5aabe"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.229495 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-kube-api-access-j7bt9" (OuterVolumeSpecName: "kube-api-access-j7bt9") pod "1e434aff-123b-42e2-8c40-c82c0bd5aabe" (UID: "1e434aff-123b-42e2-8c40-c82c0bd5aabe"). InnerVolumeSpecName "kube-api-access-j7bt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.234179 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "1e434aff-123b-42e2-8c40-c82c0bd5aabe" (UID: "1e434aff-123b-42e2-8c40-c82c0bd5aabe"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.250181 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1e434aff-123b-42e2-8c40-c82c0bd5aabe" (UID: "1e434aff-123b-42e2-8c40-c82c0bd5aabe"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.251807 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-inventory" (OuterVolumeSpecName: "inventory") pod "1e434aff-123b-42e2-8c40-c82c0bd5aabe" (UID: "1e434aff-123b-42e2-8c40-c82c0bd5aabe"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.320903 4637 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.320957 4637 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.320972 4637 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.320983 4637 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.320995 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7bt9\" (UniqueName: \"kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-kube-api-access-j7bt9\") on node \"crc\" DevicePath \"\"" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.321007 4637 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.321016 4637 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.321027 4637 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.321035 4637 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.321043 4637 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.321052 4637 reconciler_common.go:293] "Volume detached for 
volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.321060 4637 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.321069 4637 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e434aff-123b-42e2-8c40-c82c0bd5aabe-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.321079 4637 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1e434aff-123b-42e2-8c40-c82c0bd5aabe-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.613902 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.613970 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.678811 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" 
event={"ID":"1e434aff-123b-42e2-8c40-c82c0bd5aabe","Type":"ContainerDied","Data":"6f138e58ae6b6e70c72c89288e7953b0c00c47873f294c17e9c5bc80dfb09994"} Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.679044 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f138e58ae6b6e70c72c89288e7953b0c00c47873f294c17e9c5bc80dfb09994" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.678866 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s548v" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.818851 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-n9fmc"] Dec 01 15:19:45 crc kubenswrapper[4637]: E1201 15:19:45.820276 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e434aff-123b-42e2-8c40-c82c0bd5aabe" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.820410 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e434aff-123b-42e2-8c40-c82c0bd5aabe" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.820824 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e434aff-123b-42e2-8c40-c82c0bd5aabe" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.821716 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n9fmc" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.825085 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.826255 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lt5wx" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.826349 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.826255 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.826863 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.833952 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-n9fmc"] Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.935161 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7gm6\" (UniqueName: \"kubernetes.io/projected/07986dae-e60d-4809-88fe-cbd86b27ef81-kube-api-access-k7gm6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n9fmc\" (UID: \"07986dae-e60d-4809-88fe-cbd86b27ef81\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n9fmc" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.935251 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/07986dae-e60d-4809-88fe-cbd86b27ef81-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n9fmc\" (UID: \"07986dae-e60d-4809-88fe-cbd86b27ef81\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n9fmc" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.935297 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07986dae-e60d-4809-88fe-cbd86b27ef81-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n9fmc\" (UID: \"07986dae-e60d-4809-88fe-cbd86b27ef81\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n9fmc" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.935322 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/07986dae-e60d-4809-88fe-cbd86b27ef81-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n9fmc\" (UID: \"07986dae-e60d-4809-88fe-cbd86b27ef81\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n9fmc" Dec 01 15:19:45 crc kubenswrapper[4637]: I1201 15:19:45.935437 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07986dae-e60d-4809-88fe-cbd86b27ef81-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n9fmc\" (UID: \"07986dae-e60d-4809-88fe-cbd86b27ef81\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n9fmc" Dec 01 15:19:46 crc kubenswrapper[4637]: I1201 15:19:46.037522 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/07986dae-e60d-4809-88fe-cbd86b27ef81-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n9fmc\" (UID: \"07986dae-e60d-4809-88fe-cbd86b27ef81\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n9fmc" Dec 01 15:19:46 crc kubenswrapper[4637]: I1201 15:19:46.037597 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/07986dae-e60d-4809-88fe-cbd86b27ef81-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n9fmc\" (UID: \"07986dae-e60d-4809-88fe-cbd86b27ef81\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n9fmc" Dec 01 15:19:46 crc kubenswrapper[4637]: I1201 15:19:46.037620 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/07986dae-e60d-4809-88fe-cbd86b27ef81-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n9fmc\" (UID: \"07986dae-e60d-4809-88fe-cbd86b27ef81\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n9fmc" Dec 01 15:19:46 crc kubenswrapper[4637]: I1201 15:19:46.037700 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07986dae-e60d-4809-88fe-cbd86b27ef81-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n9fmc\" (UID: \"07986dae-e60d-4809-88fe-cbd86b27ef81\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n9fmc" Dec 01 15:19:46 crc kubenswrapper[4637]: I1201 15:19:46.037761 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7gm6\" (UniqueName: \"kubernetes.io/projected/07986dae-e60d-4809-88fe-cbd86b27ef81-kube-api-access-k7gm6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n9fmc\" (UID: \"07986dae-e60d-4809-88fe-cbd86b27ef81\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n9fmc" Dec 01 15:19:46 crc kubenswrapper[4637]: I1201 15:19:46.038853 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/07986dae-e60d-4809-88fe-cbd86b27ef81-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n9fmc\" (UID: \"07986dae-e60d-4809-88fe-cbd86b27ef81\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n9fmc" Dec 01 15:19:46 crc 
kubenswrapper[4637]: I1201 15:19:46.043839 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07986dae-e60d-4809-88fe-cbd86b27ef81-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n9fmc\" (UID: \"07986dae-e60d-4809-88fe-cbd86b27ef81\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n9fmc"
Dec 01 15:19:46 crc kubenswrapper[4637]: I1201 15:19:46.044612 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/07986dae-e60d-4809-88fe-cbd86b27ef81-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n9fmc\" (UID: \"07986dae-e60d-4809-88fe-cbd86b27ef81\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n9fmc"
Dec 01 15:19:46 crc kubenswrapper[4637]: I1201 15:19:46.049430 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07986dae-e60d-4809-88fe-cbd86b27ef81-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n9fmc\" (UID: \"07986dae-e60d-4809-88fe-cbd86b27ef81\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n9fmc"
Dec 01 15:19:46 crc kubenswrapper[4637]: I1201 15:19:46.056178 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7gm6\" (UniqueName: \"kubernetes.io/projected/07986dae-e60d-4809-88fe-cbd86b27ef81-kube-api-access-k7gm6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n9fmc\" (UID: \"07986dae-e60d-4809-88fe-cbd86b27ef81\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n9fmc"
Dec 01 15:19:46 crc kubenswrapper[4637]: I1201 15:19:46.175827 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n9fmc"
Dec 01 15:19:46 crc kubenswrapper[4637]: I1201 15:19:46.742802 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-n9fmc"]
Dec 01 15:19:46 crc kubenswrapper[4637]: W1201 15:19:46.756409 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07986dae_e60d_4809_88fe_cbd86b27ef81.slice/crio-92dd0715b1a27da5947292f717df1a2253bc73a0cd8e6530e41b9436f7e97ba2 WatchSource:0}: Error finding container 92dd0715b1a27da5947292f717df1a2253bc73a0cd8e6530e41b9436f7e97ba2: Status 404 returned error can't find the container with id 92dd0715b1a27da5947292f717df1a2253bc73a0cd8e6530e41b9436f7e97ba2
Dec 01 15:19:47 crc kubenswrapper[4637]: I1201 15:19:47.698566 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n9fmc" event={"ID":"07986dae-e60d-4809-88fe-cbd86b27ef81","Type":"ContainerStarted","Data":"00a98821cb50f2ecc3d54505c005249caa1b38e584f173be08b117a05cb4cabb"}
Dec 01 15:19:47 crc kubenswrapper[4637]: I1201 15:19:47.698974 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n9fmc" event={"ID":"07986dae-e60d-4809-88fe-cbd86b27ef81","Type":"ContainerStarted","Data":"92dd0715b1a27da5947292f717df1a2253bc73a0cd8e6530e41b9436f7e97ba2"}
Dec 01 15:19:47 crc kubenswrapper[4637]: I1201 15:19:47.714773 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n9fmc" podStartSLOduration=2.229302363 podStartE2EDuration="2.714751266s" podCreationTimestamp="2025-12-01 15:19:45 +0000 UTC" firstStartedPulling="2025-12-01 15:19:46.760743028 +0000 UTC m=+2037.278451856" lastFinishedPulling="2025-12-01 15:19:47.246191931 +0000 UTC m=+2037.763900759" observedRunningTime="2025-12-01 15:19:47.714261893 +0000 UTC m=+2038.231970721" watchObservedRunningTime="2025-12-01 15:19:47.714751266 +0000 UTC m=+2038.232460094"
Dec 01 15:20:15 crc kubenswrapper[4637]: I1201 15:20:15.613154 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 15:20:15 crc kubenswrapper[4637]: I1201 15:20:15.613829 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 15:20:15 crc kubenswrapper[4637]: I1201 15:20:15.613890 4637 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd"
Dec 01 15:20:15 crc kubenswrapper[4637]: I1201 15:20:15.614578 4637 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c5fd51a07a1e38b401f16a4fb3406e418b65fdf2fe7bd0db652e567f20de551e"} pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 01 15:20:15 crc kubenswrapper[4637]: I1201 15:20:15.614650 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" containerID="cri-o://c5fd51a07a1e38b401f16a4fb3406e418b65fdf2fe7bd0db652e567f20de551e" gracePeriod=600
Dec 01 15:20:15 crc kubenswrapper[4637]: I1201 15:20:15.922795 4637 generic.go:334] "Generic (PLEG): container finished" podID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerID="c5fd51a07a1e38b401f16a4fb3406e418b65fdf2fe7bd0db652e567f20de551e" exitCode=0
Dec 01 15:20:15 crc kubenswrapper[4637]: I1201 15:20:15.922838 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerDied","Data":"c5fd51a07a1e38b401f16a4fb3406e418b65fdf2fe7bd0db652e567f20de551e"}
Dec 01 15:20:15 crc kubenswrapper[4637]: I1201 15:20:15.923129 4637 scope.go:117] "RemoveContainer" containerID="6c543d095534b519790b04aff7e0503376283f52da72b1cab02e06953014abf8"
Dec 01 15:20:16 crc kubenswrapper[4637]: I1201 15:20:16.935976 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerStarted","Data":"c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38"}
Dec 01 15:20:58 crc kubenswrapper[4637]: I1201 15:20:58.312879 4637 generic.go:334] "Generic (PLEG): container finished" podID="07986dae-e60d-4809-88fe-cbd86b27ef81" containerID="00a98821cb50f2ecc3d54505c005249caa1b38e584f173be08b117a05cb4cabb" exitCode=0
Dec 01 15:20:58 crc kubenswrapper[4637]: I1201 15:20:58.312965 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n9fmc" event={"ID":"07986dae-e60d-4809-88fe-cbd86b27ef81","Type":"ContainerDied","Data":"00a98821cb50f2ecc3d54505c005249caa1b38e584f173be08b117a05cb4cabb"}
Dec 01 15:20:59 crc kubenswrapper[4637]: I1201 15:20:59.913367 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n9fmc"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.003730 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07986dae-e60d-4809-88fe-cbd86b27ef81-ovn-combined-ca-bundle\") pod \"07986dae-e60d-4809-88fe-cbd86b27ef81\" (UID: \"07986dae-e60d-4809-88fe-cbd86b27ef81\") "
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.003814 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/07986dae-e60d-4809-88fe-cbd86b27ef81-ssh-key\") pod \"07986dae-e60d-4809-88fe-cbd86b27ef81\" (UID: \"07986dae-e60d-4809-88fe-cbd86b27ef81\") "
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.003861 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7gm6\" (UniqueName: \"kubernetes.io/projected/07986dae-e60d-4809-88fe-cbd86b27ef81-kube-api-access-k7gm6\") pod \"07986dae-e60d-4809-88fe-cbd86b27ef81\" (UID: \"07986dae-e60d-4809-88fe-cbd86b27ef81\") "
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.003894 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07986dae-e60d-4809-88fe-cbd86b27ef81-inventory\") pod \"07986dae-e60d-4809-88fe-cbd86b27ef81\" (UID: \"07986dae-e60d-4809-88fe-cbd86b27ef81\") "
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.004108 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/07986dae-e60d-4809-88fe-cbd86b27ef81-ovncontroller-config-0\") pod \"07986dae-e60d-4809-88fe-cbd86b27ef81\" (UID: \"07986dae-e60d-4809-88fe-cbd86b27ef81\") "
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.065777 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07986dae-e60d-4809-88fe-cbd86b27ef81-kube-api-access-k7gm6" (OuterVolumeSpecName: "kube-api-access-k7gm6") pod "07986dae-e60d-4809-88fe-cbd86b27ef81" (UID: "07986dae-e60d-4809-88fe-cbd86b27ef81"). InnerVolumeSpecName "kube-api-access-k7gm6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.066787 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07986dae-e60d-4809-88fe-cbd86b27ef81-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "07986dae-e60d-4809-88fe-cbd86b27ef81" (UID: "07986dae-e60d-4809-88fe-cbd86b27ef81"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.071264 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07986dae-e60d-4809-88fe-cbd86b27ef81-inventory" (OuterVolumeSpecName: "inventory") pod "07986dae-e60d-4809-88fe-cbd86b27ef81" (UID: "07986dae-e60d-4809-88fe-cbd86b27ef81"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.076907 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07986dae-e60d-4809-88fe-cbd86b27ef81-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "07986dae-e60d-4809-88fe-cbd86b27ef81" (UID: "07986dae-e60d-4809-88fe-cbd86b27ef81"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.078390 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07986dae-e60d-4809-88fe-cbd86b27ef81-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "07986dae-e60d-4809-88fe-cbd86b27ef81" (UID: "07986dae-e60d-4809-88fe-cbd86b27ef81"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.106884 4637 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/07986dae-e60d-4809-88fe-cbd86b27ef81-ovncontroller-config-0\") on node \"crc\" DevicePath \"\""
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.106946 4637 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07986dae-e60d-4809-88fe-cbd86b27ef81-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.106957 4637 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/07986dae-e60d-4809-88fe-cbd86b27ef81-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.106967 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7gm6\" (UniqueName: \"kubernetes.io/projected/07986dae-e60d-4809-88fe-cbd86b27ef81-kube-api-access-k7gm6\") on node \"crc\" DevicePath \"\""
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.106977 4637 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07986dae-e60d-4809-88fe-cbd86b27ef81-inventory\") on node \"crc\" DevicePath \"\""
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.337282 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n9fmc" event={"ID":"07986dae-e60d-4809-88fe-cbd86b27ef81","Type":"ContainerDied","Data":"92dd0715b1a27da5947292f717df1a2253bc73a0cd8e6530e41b9436f7e97ba2"}
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.337323 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92dd0715b1a27da5947292f717df1a2253bc73a0cd8e6530e41b9436f7e97ba2"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.338013 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n9fmc"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.452775 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m"]
Dec 01 15:21:00 crc kubenswrapper[4637]: E1201 15:21:00.453267 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07986dae-e60d-4809-88fe-cbd86b27ef81" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.453292 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="07986dae-e60d-4809-88fe-cbd86b27ef81" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.453566 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="07986dae-e60d-4809-88fe-cbd86b27ef81" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.454430 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.458847 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.458978 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.459045 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.459076 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.459139 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lt5wx"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.459231 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.490008 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m"]
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.523761 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m\" (UID: \"01f54aa2-e74c-40e5-a386-da5ea69b918c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.523830 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m\" (UID: \"01f54aa2-e74c-40e5-a386-da5ea69b918c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.523854 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m\" (UID: \"01f54aa2-e74c-40e5-a386-da5ea69b918c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.523881 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwmp9\" (UniqueName: \"kubernetes.io/projected/01f54aa2-e74c-40e5-a386-da5ea69b918c-kube-api-access-kwmp9\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m\" (UID: \"01f54aa2-e74c-40e5-a386-da5ea69b918c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.523963 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m\" (UID: \"01f54aa2-e74c-40e5-a386-da5ea69b918c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.524065 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m\" (UID: \"01f54aa2-e74c-40e5-a386-da5ea69b918c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.626151 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m\" (UID: \"01f54aa2-e74c-40e5-a386-da5ea69b918c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.626214 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m\" (UID: \"01f54aa2-e74c-40e5-a386-da5ea69b918c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.626242 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwmp9\" (UniqueName: \"kubernetes.io/projected/01f54aa2-e74c-40e5-a386-da5ea69b918c-kube-api-access-kwmp9\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m\" (UID: \"01f54aa2-e74c-40e5-a386-da5ea69b918c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.626318 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m\" (UID: \"01f54aa2-e74c-40e5-a386-da5ea69b918c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.626366 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m\" (UID: \"01f54aa2-e74c-40e5-a386-da5ea69b918c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.626444 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m\" (UID: \"01f54aa2-e74c-40e5-a386-da5ea69b918c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.633924 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m\" (UID: \"01f54aa2-e74c-40e5-a386-da5ea69b918c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.634060 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m\" (UID: \"01f54aa2-e74c-40e5-a386-da5ea69b918c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.634711 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m\" (UID: \"01f54aa2-e74c-40e5-a386-da5ea69b918c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.634783 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m\" (UID: \"01f54aa2-e74c-40e5-a386-da5ea69b918c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.637992 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m\" (UID: \"01f54aa2-e74c-40e5-a386-da5ea69b918c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.647723 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwmp9\" (UniqueName: \"kubernetes.io/projected/01f54aa2-e74c-40e5-a386-da5ea69b918c-kube-api-access-kwmp9\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m\" (UID: \"01f54aa2-e74c-40e5-a386-da5ea69b918c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m"
Dec 01 15:21:00 crc kubenswrapper[4637]: I1201 15:21:00.780750 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m"
Dec 01 15:21:01 crc kubenswrapper[4637]: I1201 15:21:01.175482 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m"]
Dec 01 15:21:01 crc kubenswrapper[4637]: I1201 15:21:01.348872 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m" event={"ID":"01f54aa2-e74c-40e5-a386-da5ea69b918c","Type":"ContainerStarted","Data":"291cc17c9ead4f44759916dc41525dd03c2987cad6bf9b0ffdfc57f5b356c8ba"}
Dec 01 15:21:02 crc kubenswrapper[4637]: I1201 15:21:02.362258 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m" event={"ID":"01f54aa2-e74c-40e5-a386-da5ea69b918c","Type":"ContainerStarted","Data":"75d29818572f2edf1e3cc0a481ed3b71c828e2c137c20e04ecda8c4c463a330e"}
Dec 01 15:21:02 crc kubenswrapper[4637]: I1201 15:21:02.391462 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m" podStartSLOduration=1.7856709259999999 podStartE2EDuration="2.391442763s" podCreationTimestamp="2025-12-01 15:21:00 +0000 UTC" firstStartedPulling="2025-12-01 15:21:01.188151633 +0000 UTC m=+2111.705860461" lastFinishedPulling="2025-12-01 15:21:01.79392347 +0000 UTC m=+2112.311632298" observedRunningTime="2025-12-01 15:21:02.382710136 +0000 UTC m=+2112.900418964" watchObservedRunningTime="2025-12-01 15:21:02.391442763 +0000 UTC m=+2112.909151591"
Dec 01 15:21:55 crc kubenswrapper[4637]: I1201 15:21:55.879434 4637 generic.go:334] "Generic (PLEG): container finished" podID="01f54aa2-e74c-40e5-a386-da5ea69b918c" containerID="75d29818572f2edf1e3cc0a481ed3b71c828e2c137c20e04ecda8c4c463a330e" exitCode=0
Dec 01 15:21:55 crc kubenswrapper[4637]: I1201 15:21:55.879535 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m" event={"ID":"01f54aa2-e74c-40e5-a386-da5ea69b918c","Type":"ContainerDied","Data":"75d29818572f2edf1e3cc0a481ed3b71c828e2c137c20e04ecda8c4c463a330e"}
Dec 01 15:21:57 crc kubenswrapper[4637]: I1201 15:21:57.318590 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m"
Dec 01 15:21:57 crc kubenswrapper[4637]: I1201 15:21:57.378561 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-neutron-metadata-combined-ca-bundle\") pod \"01f54aa2-e74c-40e5-a386-da5ea69b918c\" (UID: \"01f54aa2-e74c-40e5-a386-da5ea69b918c\") "
Dec 01 15:21:57 crc kubenswrapper[4637]: I1201 15:21:57.378615 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-ssh-key\") pod \"01f54aa2-e74c-40e5-a386-da5ea69b918c\" (UID: \"01f54aa2-e74c-40e5-a386-da5ea69b918c\") "
Dec 01 15:21:57 crc kubenswrapper[4637]: I1201 15:21:57.378806 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwmp9\" (UniqueName: \"kubernetes.io/projected/01f54aa2-e74c-40e5-a386-da5ea69b918c-kube-api-access-kwmp9\") pod \"01f54aa2-e74c-40e5-a386-da5ea69b918c\" (UID: \"01f54aa2-e74c-40e5-a386-da5ea69b918c\") "
Dec 01 15:21:57 crc kubenswrapper[4637]: I1201 15:21:57.406557 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01f54aa2-e74c-40e5-a386-da5ea69b918c-kube-api-access-kwmp9" (OuterVolumeSpecName: "kube-api-access-kwmp9") pod "01f54aa2-e74c-40e5-a386-da5ea69b918c" (UID: "01f54aa2-e74c-40e5-a386-da5ea69b918c"). InnerVolumeSpecName "kube-api-access-kwmp9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 15:21:57 crc kubenswrapper[4637]: I1201 15:21:57.406656 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "01f54aa2-e74c-40e5-a386-da5ea69b918c" (UID: "01f54aa2-e74c-40e5-a386-da5ea69b918c"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 15:21:57 crc kubenswrapper[4637]: I1201 15:21:57.436109 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "01f54aa2-e74c-40e5-a386-da5ea69b918c" (UID: "01f54aa2-e74c-40e5-a386-da5ea69b918c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 15:21:57 crc kubenswrapper[4637]: I1201 15:21:57.483648 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-nova-metadata-neutron-config-0\") pod \"01f54aa2-e74c-40e5-a386-da5ea69b918c\" (UID: \"01f54aa2-e74c-40e5-a386-da5ea69b918c\") "
Dec 01 15:21:57 crc kubenswrapper[4637]: I1201 15:21:57.484493 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-inventory\") pod \"01f54aa2-e74c-40e5-a386-da5ea69b918c\" (UID: \"01f54aa2-e74c-40e5-a386-da5ea69b918c\") "
Dec 01 15:21:57 crc kubenswrapper[4637]: I1201 15:21:57.484764 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"01f54aa2-e74c-40e5-a386-da5ea69b918c\" (UID: \"01f54aa2-e74c-40e5-a386-da5ea69b918c\") "
Dec 01 15:21:57 crc kubenswrapper[4637]: I1201 15:21:57.485479 4637 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 01 15:21:57 crc kubenswrapper[4637]: I1201 15:21:57.485616 4637 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 01 15:21:57 crc kubenswrapper[4637]: I1201 15:21:57.485678 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwmp9\" (UniqueName: \"kubernetes.io/projected/01f54aa2-e74c-40e5-a386-da5ea69b918c-kube-api-access-kwmp9\") on node \"crc\" DevicePath \"\""
Dec 01 15:21:57 crc kubenswrapper[4637]: I1201 15:21:57.513090 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-inventory" (OuterVolumeSpecName: "inventory") pod "01f54aa2-e74c-40e5-a386-da5ea69b918c" (UID: "01f54aa2-e74c-40e5-a386-da5ea69b918c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 15:21:57 crc kubenswrapper[4637]: I1201 15:21:57.514049 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "01f54aa2-e74c-40e5-a386-da5ea69b918c" (UID: "01f54aa2-e74c-40e5-a386-da5ea69b918c"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 15:21:57 crc kubenswrapper[4637]: I1201 15:21:57.519235 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "01f54aa2-e74c-40e5-a386-da5ea69b918c" (UID: "01f54aa2-e74c-40e5-a386-da5ea69b918c"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 15:21:57 crc kubenswrapper[4637]: I1201 15:21:57.586974 4637 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\""
Dec 01 15:21:57 crc kubenswrapper[4637]: I1201 15:21:57.587223 4637 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\""
Dec 01 15:21:57 crc kubenswrapper[4637]: I1201 15:21:57.587282 4637 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01f54aa2-e74c-40e5-a386-da5ea69b918c-inventory\") on node \"crc\" DevicePath \"\""
Dec 01 15:21:57 crc kubenswrapper[4637]: I1201 15:21:57.897373 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m" event={"ID":"01f54aa2-e74c-40e5-a386-da5ea69b918c","Type":"ContainerDied","Data":"291cc17c9ead4f44759916dc41525dd03c2987cad6bf9b0ffdfc57f5b356c8ba"}
Dec 01 15:21:57 crc kubenswrapper[4637]: I1201 15:21:57.897884 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="291cc17c9ead4f44759916dc41525dd03c2987cad6bf9b0ffdfc57f5b356c8ba"
Dec 01 15:21:57 crc kubenswrapper[4637]: I1201 15:21:57.897416 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m"
Dec 01 15:21:58 crc kubenswrapper[4637]: I1201 15:21:58.008287 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp"]
Dec 01 15:21:58 crc kubenswrapper[4637]: E1201 15:21:58.008700 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f54aa2-e74c-40e5-a386-da5ea69b918c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Dec 01 15:21:58 crc kubenswrapper[4637]: I1201 15:21:58.008722 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f54aa2-e74c-40e5-a386-da5ea69b918c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Dec 01 15:21:58 crc kubenswrapper[4637]: I1201 15:21:58.008919 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="01f54aa2-e74c-40e5-a386-da5ea69b918c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Dec 01 15:21:58 crc kubenswrapper[4637]: I1201 15:21:58.009610 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp"
Dec 01 15:21:58 crc kubenswrapper[4637]: I1201 15:21:58.014200 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 01 15:21:58 crc kubenswrapper[4637]: I1201 15:21:58.014506 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 01 15:21:58 crc kubenswrapper[4637]: I1201 15:21:58.016768 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret"
Dec 01 15:21:58 crc kubenswrapper[4637]: I1201 15:21:58.017186 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lt5wx"
Dec 01 15:21:58 crc kubenswrapper[4637]: I1201 15:21:58.018845 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 01 15:21:58 crc kubenswrapper[4637]: I1201 15:21:58.031609 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp"]
Dec 01 15:21:58 crc kubenswrapper[4637]: I1201 15:21:58.096524 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w49bn\" (UniqueName: \"kubernetes.io/projected/6a16b3a0-82a0-4cc6-820a-6c084408566f-kube-api-access-w49bn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp\" (UID: \"6a16b3a0-82a0-4cc6-820a-6c084408566f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp"
Dec 01 15:21:58 crc kubenswrapper[4637]: I1201 15:21:58.096610 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a16b3a0-82a0-4cc6-820a-6c084408566f-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp\" (UID: \"6a16b3a0-82a0-4cc6-820a-6c084408566f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp"
Dec 01 15:21:58 crc kubenswrapper[4637]: I1201 15:21:58.096744 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a16b3a0-82a0-4cc6-820a-6c084408566f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp\" (UID: \"6a16b3a0-82a0-4cc6-820a-6c084408566f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp"
Dec 01 15:21:58 crc kubenswrapper[4637]: I1201 15:21:58.096803 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a16b3a0-82a0-4cc6-820a-6c084408566f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp\" (UID: \"6a16b3a0-82a0-4cc6-820a-6c084408566f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp"
Dec 01 15:21:58 crc kubenswrapper[4637]: I1201 15:21:58.096875 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6a16b3a0-82a0-4cc6-820a-6c084408566f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp\" (UID: \"6a16b3a0-82a0-4cc6-820a-6c084408566f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp"
Dec 01 15:21:58 crc kubenswrapper[4637]: I1201 15:21:58.198282 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a16b3a0-82a0-4cc6-820a-6c084408566f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp\" (UID: \"6a16b3a0-82a0-4cc6-820a-6c084408566f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp"
Dec 01 15:21:58 crc kubenswrapper[4637]: I1201 15:21:58.198332 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a16b3a0-82a0-4cc6-820a-6c084408566f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp\" (UID: \"6a16b3a0-82a0-4cc6-820a-6c084408566f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp"
Dec 01 15:21:58 crc kubenswrapper[4637]: I1201 15:21:58.198370 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6a16b3a0-82a0-4cc6-820a-6c084408566f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp\" (UID: \"6a16b3a0-82a0-4cc6-820a-6c084408566f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp"
Dec 01 15:21:58 crc kubenswrapper[4637]: I1201 15:21:58.198482 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w49bn\" (UniqueName: \"kubernetes.io/projected/6a16b3a0-82a0-4cc6-820a-6c084408566f-kube-api-access-w49bn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp\" (UID: \"6a16b3a0-82a0-4cc6-820a-6c084408566f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp"
Dec 01 15:21:58 crc kubenswrapper[4637]: I1201 15:21:58.198513 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a16b3a0-82a0-4cc6-820a-6c084408566f-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp\" (UID: \"6a16b3a0-82a0-4cc6-820a-6c084408566f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp"
Dec 01 15:21:58 crc kubenswrapper[4637]: I1201 15:21:58.203579 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a16b3a0-82a0-4cc6-820a-6c084408566f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp\" (UID: \"6a16b3a0-82a0-4cc6-820a-6c084408566f\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp" Dec 01 15:21:58 crc kubenswrapper[4637]: I1201 15:21:58.205288 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a16b3a0-82a0-4cc6-820a-6c084408566f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp\" (UID: \"6a16b3a0-82a0-4cc6-820a-6c084408566f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp" Dec 01 15:21:58 crc kubenswrapper[4637]: I1201 15:21:58.206542 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a16b3a0-82a0-4cc6-820a-6c084408566f-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp\" (UID: \"6a16b3a0-82a0-4cc6-820a-6c084408566f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp" Dec 01 15:21:58 crc kubenswrapper[4637]: I1201 15:21:58.210502 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6a16b3a0-82a0-4cc6-820a-6c084408566f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp\" (UID: \"6a16b3a0-82a0-4cc6-820a-6c084408566f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp" Dec 01 15:21:58 crc kubenswrapper[4637]: I1201 15:21:58.263050 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w49bn\" (UniqueName: \"kubernetes.io/projected/6a16b3a0-82a0-4cc6-820a-6c084408566f-kube-api-access-w49bn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp\" (UID: \"6a16b3a0-82a0-4cc6-820a-6c084408566f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp" Dec 01 15:21:58 crc kubenswrapper[4637]: I1201 15:21:58.334224 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp" Dec 01 15:21:59 crc kubenswrapper[4637]: I1201 15:21:59.096519 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp"] Dec 01 15:21:59 crc kubenswrapper[4637]: I1201 15:21:59.917925 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp" event={"ID":"6a16b3a0-82a0-4cc6-820a-6c084408566f","Type":"ContainerStarted","Data":"1f1e1d56820073edd3dde5671a5d4dc1e2607fe0d1738ede705146c248de92e6"} Dec 01 15:22:00 crc kubenswrapper[4637]: I1201 15:22:00.930160 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp" event={"ID":"6a16b3a0-82a0-4cc6-820a-6c084408566f","Type":"ContainerStarted","Data":"0cea48d5970bcbfa5d54505868e6e205ef5fceee76f468bdc201387445b001b0"} Dec 01 15:22:00 crc kubenswrapper[4637]: I1201 15:22:00.956754 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp" podStartSLOduration=3.275528833 podStartE2EDuration="3.956731227s" podCreationTimestamp="2025-12-01 15:21:57 +0000 UTC" firstStartedPulling="2025-12-01 15:21:59.099084139 +0000 UTC m=+2169.616792977" lastFinishedPulling="2025-12-01 15:21:59.780286543 +0000 UTC m=+2170.297995371" observedRunningTime="2025-12-01 15:22:00.953625353 +0000 UTC m=+2171.471334181" watchObservedRunningTime="2025-12-01 15:22:00.956731227 +0000 UTC m=+2171.474440075" Dec 01 15:22:15 crc kubenswrapper[4637]: I1201 15:22:15.613796 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:22:15 crc kubenswrapper[4637]: I1201 
15:22:15.615239 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:22:45 crc kubenswrapper[4637]: I1201 15:22:45.614123 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:22:45 crc kubenswrapper[4637]: I1201 15:22:45.615636 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:23:15 crc kubenswrapper[4637]: I1201 15:23:15.613834 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:23:15 crc kubenswrapper[4637]: I1201 15:23:15.614386 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:23:15 crc kubenswrapper[4637]: I1201 15:23:15.614433 4637 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 15:23:15 crc kubenswrapper[4637]: I1201 15:23:15.615224 4637 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38"} pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 15:23:15 crc kubenswrapper[4637]: I1201 15:23:15.615280 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" containerID="cri-o://c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38" gracePeriod=600 Dec 01 15:23:15 crc kubenswrapper[4637]: E1201 15:23:15.752855 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:23:16 crc kubenswrapper[4637]: I1201 15:23:16.586597 4637 generic.go:334] "Generic (PLEG): container finished" podID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerID="c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38" exitCode=0 Dec 01 15:23:16 crc kubenswrapper[4637]: I1201 15:23:16.586657 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerDied","Data":"c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38"} Dec 01 15:23:16 crc 
kubenswrapper[4637]: I1201 15:23:16.586697 4637 scope.go:117] "RemoveContainer" containerID="c5fd51a07a1e38b401f16a4fb3406e418b65fdf2fe7bd0db652e567f20de551e" Dec 01 15:23:16 crc kubenswrapper[4637]: I1201 15:23:16.590106 4637 scope.go:117] "RemoveContainer" containerID="c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38" Dec 01 15:23:16 crc kubenswrapper[4637]: E1201 15:23:16.590792 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:23:28 crc kubenswrapper[4637]: I1201 15:23:28.771683 4637 scope.go:117] "RemoveContainer" containerID="c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38" Dec 01 15:23:28 crc kubenswrapper[4637]: E1201 15:23:28.772405 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:23:40 crc kubenswrapper[4637]: I1201 15:23:40.771151 4637 scope.go:117] "RemoveContainer" containerID="c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38" Dec 01 15:23:40 crc kubenswrapper[4637]: E1201 15:23:40.771905 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:23:42 crc kubenswrapper[4637]: I1201 15:23:42.203678 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-526zb"] Dec 01 15:23:42 crc kubenswrapper[4637]: I1201 15:23:42.207395 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-526zb" Dec 01 15:23:42 crc kubenswrapper[4637]: I1201 15:23:42.229811 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-526zb"] Dec 01 15:23:42 crc kubenswrapper[4637]: I1201 15:23:42.250351 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r7lr\" (UniqueName: \"kubernetes.io/projected/3670b285-f612-4054-8c54-92fe0c297d81-kube-api-access-4r7lr\") pod \"redhat-operators-526zb\" (UID: \"3670b285-f612-4054-8c54-92fe0c297d81\") " pod="openshift-marketplace/redhat-operators-526zb" Dec 01 15:23:42 crc kubenswrapper[4637]: I1201 15:23:42.251731 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3670b285-f612-4054-8c54-92fe0c297d81-utilities\") pod \"redhat-operators-526zb\" (UID: \"3670b285-f612-4054-8c54-92fe0c297d81\") " pod="openshift-marketplace/redhat-operators-526zb" Dec 01 15:23:42 crc kubenswrapper[4637]: I1201 15:23:42.251919 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3670b285-f612-4054-8c54-92fe0c297d81-catalog-content\") pod \"redhat-operators-526zb\" (UID: \"3670b285-f612-4054-8c54-92fe0c297d81\") " pod="openshift-marketplace/redhat-operators-526zb" Dec 01 15:23:42 crc 
kubenswrapper[4637]: I1201 15:23:42.354590 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3670b285-f612-4054-8c54-92fe0c297d81-catalog-content\") pod \"redhat-operators-526zb\" (UID: \"3670b285-f612-4054-8c54-92fe0c297d81\") " pod="openshift-marketplace/redhat-operators-526zb" Dec 01 15:23:42 crc kubenswrapper[4637]: I1201 15:23:42.354697 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r7lr\" (UniqueName: \"kubernetes.io/projected/3670b285-f612-4054-8c54-92fe0c297d81-kube-api-access-4r7lr\") pod \"redhat-operators-526zb\" (UID: \"3670b285-f612-4054-8c54-92fe0c297d81\") " pod="openshift-marketplace/redhat-operators-526zb" Dec 01 15:23:42 crc kubenswrapper[4637]: I1201 15:23:42.354727 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3670b285-f612-4054-8c54-92fe0c297d81-utilities\") pod \"redhat-operators-526zb\" (UID: \"3670b285-f612-4054-8c54-92fe0c297d81\") " pod="openshift-marketplace/redhat-operators-526zb" Dec 01 15:23:42 crc kubenswrapper[4637]: I1201 15:23:42.355239 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3670b285-f612-4054-8c54-92fe0c297d81-utilities\") pod \"redhat-operators-526zb\" (UID: \"3670b285-f612-4054-8c54-92fe0c297d81\") " pod="openshift-marketplace/redhat-operators-526zb" Dec 01 15:23:42 crc kubenswrapper[4637]: I1201 15:23:42.355640 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3670b285-f612-4054-8c54-92fe0c297d81-catalog-content\") pod \"redhat-operators-526zb\" (UID: \"3670b285-f612-4054-8c54-92fe0c297d81\") " pod="openshift-marketplace/redhat-operators-526zb" Dec 01 15:23:42 crc kubenswrapper[4637]: I1201 15:23:42.373097 4637 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r7lr\" (UniqueName: \"kubernetes.io/projected/3670b285-f612-4054-8c54-92fe0c297d81-kube-api-access-4r7lr\") pod \"redhat-operators-526zb\" (UID: \"3670b285-f612-4054-8c54-92fe0c297d81\") " pod="openshift-marketplace/redhat-operators-526zb" Dec 01 15:23:42 crc kubenswrapper[4637]: I1201 15:23:42.527258 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-526zb" Dec 01 15:23:43 crc kubenswrapper[4637]: I1201 15:23:43.027136 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-526zb"] Dec 01 15:23:43 crc kubenswrapper[4637]: I1201 15:23:43.841299 4637 generic.go:334] "Generic (PLEG): container finished" podID="3670b285-f612-4054-8c54-92fe0c297d81" containerID="fc8f5692e85c42cdedbf739e933478c4f84b27be520a579fd2b31a70f2334fde" exitCode=0 Dec 01 15:23:43 crc kubenswrapper[4637]: I1201 15:23:43.841398 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-526zb" event={"ID":"3670b285-f612-4054-8c54-92fe0c297d81","Type":"ContainerDied","Data":"fc8f5692e85c42cdedbf739e933478c4f84b27be520a579fd2b31a70f2334fde"} Dec 01 15:23:43 crc kubenswrapper[4637]: I1201 15:23:43.841793 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-526zb" event={"ID":"3670b285-f612-4054-8c54-92fe0c297d81","Type":"ContainerStarted","Data":"e0cd538ed5500585c0b96eb5012766667d461d7163fd22a3044444fd5cdc8d37"} Dec 01 15:23:43 crc kubenswrapper[4637]: I1201 15:23:43.843378 4637 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 15:23:46 crc kubenswrapper[4637]: I1201 15:23:46.907438 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-526zb" 
event={"ID":"3670b285-f612-4054-8c54-92fe0c297d81","Type":"ContainerStarted","Data":"16abd7dd18bd1cb10a5b65c5702a3c975f70dd6cee746ad89c888a7a189499db"} Dec 01 15:23:49 crc kubenswrapper[4637]: I1201 15:23:49.937802 4637 generic.go:334] "Generic (PLEG): container finished" podID="3670b285-f612-4054-8c54-92fe0c297d81" containerID="16abd7dd18bd1cb10a5b65c5702a3c975f70dd6cee746ad89c888a7a189499db" exitCode=0 Dec 01 15:23:49 crc kubenswrapper[4637]: I1201 15:23:49.937837 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-526zb" event={"ID":"3670b285-f612-4054-8c54-92fe0c297d81","Type":"ContainerDied","Data":"16abd7dd18bd1cb10a5b65c5702a3c975f70dd6cee746ad89c888a7a189499db"} Dec 01 15:23:50 crc kubenswrapper[4637]: I1201 15:23:50.947841 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-526zb" event={"ID":"3670b285-f612-4054-8c54-92fe0c297d81","Type":"ContainerStarted","Data":"26e383cd33449bba6657fb2e17b97ea6b1c001882ecb16436ba60bdaedf30e55"} Dec 01 15:23:50 crc kubenswrapper[4637]: I1201 15:23:50.973066 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-526zb" podStartSLOduration=2.378129318 podStartE2EDuration="8.973044992s" podCreationTimestamp="2025-12-01 15:23:42 +0000 UTC" firstStartedPulling="2025-12-01 15:23:43.843166465 +0000 UTC m=+2274.360875293" lastFinishedPulling="2025-12-01 15:23:50.438082139 +0000 UTC m=+2280.955790967" observedRunningTime="2025-12-01 15:23:50.967643986 +0000 UTC m=+2281.485352824" watchObservedRunningTime="2025-12-01 15:23:50.973044992 +0000 UTC m=+2281.490753810" Dec 01 15:23:52 crc kubenswrapper[4637]: I1201 15:23:52.528100 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-526zb" Dec 01 15:23:52 crc kubenswrapper[4637]: I1201 15:23:52.528160 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-operators-526zb" Dec 01 15:23:53 crc kubenswrapper[4637]: I1201 15:23:53.578492 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-526zb" podUID="3670b285-f612-4054-8c54-92fe0c297d81" containerName="registry-server" probeResult="failure" output=< Dec 01 15:23:53 crc kubenswrapper[4637]: timeout: failed to connect service ":50051" within 1s Dec 01 15:23:53 crc kubenswrapper[4637]: > Dec 01 15:23:53 crc kubenswrapper[4637]: I1201 15:23:53.771174 4637 scope.go:117] "RemoveContainer" containerID="c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38" Dec 01 15:23:53 crc kubenswrapper[4637]: E1201 15:23:53.771418 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:24:02 crc kubenswrapper[4637]: I1201 15:24:02.574355 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-526zb" Dec 01 15:24:02 crc kubenswrapper[4637]: I1201 15:24:02.638474 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-526zb" Dec 01 15:24:02 crc kubenswrapper[4637]: I1201 15:24:02.813240 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-526zb"] Dec 01 15:24:04 crc kubenswrapper[4637]: I1201 15:24:04.061426 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-526zb" podUID="3670b285-f612-4054-8c54-92fe0c297d81" containerName="registry-server" 
containerID="cri-o://26e383cd33449bba6657fb2e17b97ea6b1c001882ecb16436ba60bdaedf30e55" gracePeriod=2 Dec 01 15:24:04 crc kubenswrapper[4637]: I1201 15:24:04.484166 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-526zb" Dec 01 15:24:04 crc kubenswrapper[4637]: I1201 15:24:04.512660 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3670b285-f612-4054-8c54-92fe0c297d81-utilities\") pod \"3670b285-f612-4054-8c54-92fe0c297d81\" (UID: \"3670b285-f612-4054-8c54-92fe0c297d81\") " Dec 01 15:24:04 crc kubenswrapper[4637]: I1201 15:24:04.512797 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3670b285-f612-4054-8c54-92fe0c297d81-catalog-content\") pod \"3670b285-f612-4054-8c54-92fe0c297d81\" (UID: \"3670b285-f612-4054-8c54-92fe0c297d81\") " Dec 01 15:24:04 crc kubenswrapper[4637]: I1201 15:24:04.512840 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r7lr\" (UniqueName: \"kubernetes.io/projected/3670b285-f612-4054-8c54-92fe0c297d81-kube-api-access-4r7lr\") pod \"3670b285-f612-4054-8c54-92fe0c297d81\" (UID: \"3670b285-f612-4054-8c54-92fe0c297d81\") " Dec 01 15:24:04 crc kubenswrapper[4637]: I1201 15:24:04.516980 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3670b285-f612-4054-8c54-92fe0c297d81-utilities" (OuterVolumeSpecName: "utilities") pod "3670b285-f612-4054-8c54-92fe0c297d81" (UID: "3670b285-f612-4054-8c54-92fe0c297d81"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:24:04 crc kubenswrapper[4637]: I1201 15:24:04.522561 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3670b285-f612-4054-8c54-92fe0c297d81-kube-api-access-4r7lr" (OuterVolumeSpecName: "kube-api-access-4r7lr") pod "3670b285-f612-4054-8c54-92fe0c297d81" (UID: "3670b285-f612-4054-8c54-92fe0c297d81"). InnerVolumeSpecName "kube-api-access-4r7lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:24:04 crc kubenswrapper[4637]: I1201 15:24:04.616748 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r7lr\" (UniqueName: \"kubernetes.io/projected/3670b285-f612-4054-8c54-92fe0c297d81-kube-api-access-4r7lr\") on node \"crc\" DevicePath \"\"" Dec 01 15:24:04 crc kubenswrapper[4637]: I1201 15:24:04.616791 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3670b285-f612-4054-8c54-92fe0c297d81-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 15:24:04 crc kubenswrapper[4637]: I1201 15:24:04.644507 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3670b285-f612-4054-8c54-92fe0c297d81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3670b285-f612-4054-8c54-92fe0c297d81" (UID: "3670b285-f612-4054-8c54-92fe0c297d81"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:24:04 crc kubenswrapper[4637]: I1201 15:24:04.717758 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3670b285-f612-4054-8c54-92fe0c297d81-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 15:24:05 crc kubenswrapper[4637]: I1201 15:24:05.071381 4637 generic.go:334] "Generic (PLEG): container finished" podID="3670b285-f612-4054-8c54-92fe0c297d81" containerID="26e383cd33449bba6657fb2e17b97ea6b1c001882ecb16436ba60bdaedf30e55" exitCode=0 Dec 01 15:24:05 crc kubenswrapper[4637]: I1201 15:24:05.071421 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-526zb" event={"ID":"3670b285-f612-4054-8c54-92fe0c297d81","Type":"ContainerDied","Data":"26e383cd33449bba6657fb2e17b97ea6b1c001882ecb16436ba60bdaedf30e55"} Dec 01 15:24:05 crc kubenswrapper[4637]: I1201 15:24:05.071683 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-526zb" event={"ID":"3670b285-f612-4054-8c54-92fe0c297d81","Type":"ContainerDied","Data":"e0cd538ed5500585c0b96eb5012766667d461d7163fd22a3044444fd5cdc8d37"} Dec 01 15:24:05 crc kubenswrapper[4637]: I1201 15:24:05.071446 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-526zb" Dec 01 15:24:05 crc kubenswrapper[4637]: I1201 15:24:05.071705 4637 scope.go:117] "RemoveContainer" containerID="26e383cd33449bba6657fb2e17b97ea6b1c001882ecb16436ba60bdaedf30e55" Dec 01 15:24:05 crc kubenswrapper[4637]: I1201 15:24:05.095794 4637 scope.go:117] "RemoveContainer" containerID="16abd7dd18bd1cb10a5b65c5702a3c975f70dd6cee746ad89c888a7a189499db" Dec 01 15:24:05 crc kubenswrapper[4637]: I1201 15:24:05.106586 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-526zb"] Dec 01 15:24:05 crc kubenswrapper[4637]: I1201 15:24:05.127385 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-526zb"] Dec 01 15:24:05 crc kubenswrapper[4637]: I1201 15:24:05.150209 4637 scope.go:117] "RemoveContainer" containerID="fc8f5692e85c42cdedbf739e933478c4f84b27be520a579fd2b31a70f2334fde" Dec 01 15:24:05 crc kubenswrapper[4637]: I1201 15:24:05.190332 4637 scope.go:117] "RemoveContainer" containerID="26e383cd33449bba6657fb2e17b97ea6b1c001882ecb16436ba60bdaedf30e55" Dec 01 15:24:05 crc kubenswrapper[4637]: E1201 15:24:05.195361 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26e383cd33449bba6657fb2e17b97ea6b1c001882ecb16436ba60bdaedf30e55\": container with ID starting with 26e383cd33449bba6657fb2e17b97ea6b1c001882ecb16436ba60bdaedf30e55 not found: ID does not exist" containerID="26e383cd33449bba6657fb2e17b97ea6b1c001882ecb16436ba60bdaedf30e55" Dec 01 15:24:05 crc kubenswrapper[4637]: I1201 15:24:05.195398 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26e383cd33449bba6657fb2e17b97ea6b1c001882ecb16436ba60bdaedf30e55"} err="failed to get container status \"26e383cd33449bba6657fb2e17b97ea6b1c001882ecb16436ba60bdaedf30e55\": rpc error: code = NotFound desc = could not find container 
\"26e383cd33449bba6657fb2e17b97ea6b1c001882ecb16436ba60bdaedf30e55\": container with ID starting with 26e383cd33449bba6657fb2e17b97ea6b1c001882ecb16436ba60bdaedf30e55 not found: ID does not exist" Dec 01 15:24:05 crc kubenswrapper[4637]: I1201 15:24:05.195423 4637 scope.go:117] "RemoveContainer" containerID="16abd7dd18bd1cb10a5b65c5702a3c975f70dd6cee746ad89c888a7a189499db" Dec 01 15:24:05 crc kubenswrapper[4637]: E1201 15:24:05.195879 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16abd7dd18bd1cb10a5b65c5702a3c975f70dd6cee746ad89c888a7a189499db\": container with ID starting with 16abd7dd18bd1cb10a5b65c5702a3c975f70dd6cee746ad89c888a7a189499db not found: ID does not exist" containerID="16abd7dd18bd1cb10a5b65c5702a3c975f70dd6cee746ad89c888a7a189499db" Dec 01 15:24:05 crc kubenswrapper[4637]: I1201 15:24:05.196008 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16abd7dd18bd1cb10a5b65c5702a3c975f70dd6cee746ad89c888a7a189499db"} err="failed to get container status \"16abd7dd18bd1cb10a5b65c5702a3c975f70dd6cee746ad89c888a7a189499db\": rpc error: code = NotFound desc = could not find container \"16abd7dd18bd1cb10a5b65c5702a3c975f70dd6cee746ad89c888a7a189499db\": container with ID starting with 16abd7dd18bd1cb10a5b65c5702a3c975f70dd6cee746ad89c888a7a189499db not found: ID does not exist" Dec 01 15:24:05 crc kubenswrapper[4637]: I1201 15:24:05.196088 4637 scope.go:117] "RemoveContainer" containerID="fc8f5692e85c42cdedbf739e933478c4f84b27be520a579fd2b31a70f2334fde" Dec 01 15:24:05 crc kubenswrapper[4637]: E1201 15:24:05.196462 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc8f5692e85c42cdedbf739e933478c4f84b27be520a579fd2b31a70f2334fde\": container with ID starting with fc8f5692e85c42cdedbf739e933478c4f84b27be520a579fd2b31a70f2334fde not found: ID does not exist" 
containerID="fc8f5692e85c42cdedbf739e933478c4f84b27be520a579fd2b31a70f2334fde" Dec 01 15:24:05 crc kubenswrapper[4637]: I1201 15:24:05.196493 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc8f5692e85c42cdedbf739e933478c4f84b27be520a579fd2b31a70f2334fde"} err="failed to get container status \"fc8f5692e85c42cdedbf739e933478c4f84b27be520a579fd2b31a70f2334fde\": rpc error: code = NotFound desc = could not find container \"fc8f5692e85c42cdedbf739e933478c4f84b27be520a579fd2b31a70f2334fde\": container with ID starting with fc8f5692e85c42cdedbf739e933478c4f84b27be520a579fd2b31a70f2334fde not found: ID does not exist" Dec 01 15:24:05 crc kubenswrapper[4637]: I1201 15:24:05.780619 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3670b285-f612-4054-8c54-92fe0c297d81" path="/var/lib/kubelet/pods/3670b285-f612-4054-8c54-92fe0c297d81/volumes" Dec 01 15:24:08 crc kubenswrapper[4637]: I1201 15:24:08.771445 4637 scope.go:117] "RemoveContainer" containerID="c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38" Dec 01 15:24:08 crc kubenswrapper[4637]: E1201 15:24:08.771976 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:24:21 crc kubenswrapper[4637]: I1201 15:24:21.774625 4637 scope.go:117] "RemoveContainer" containerID="c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38" Dec 01 15:24:21 crc kubenswrapper[4637]: E1201 15:24:21.775795 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:24:33 crc kubenswrapper[4637]: I1201 15:24:33.771056 4637 scope.go:117] "RemoveContainer" containerID="c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38" Dec 01 15:24:33 crc kubenswrapper[4637]: E1201 15:24:33.771733 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:24:47 crc kubenswrapper[4637]: I1201 15:24:47.771710 4637 scope.go:117] "RemoveContainer" containerID="c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38" Dec 01 15:24:47 crc kubenswrapper[4637]: E1201 15:24:47.772445 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:24:59 crc kubenswrapper[4637]: I1201 15:24:59.779233 4637 scope.go:117] "RemoveContainer" containerID="c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38" Dec 01 15:24:59 crc kubenswrapper[4637]: E1201 15:24:59.780024 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:25:11 crc kubenswrapper[4637]: I1201 15:25:11.771813 4637 scope.go:117] "RemoveContainer" containerID="c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38" Dec 01 15:25:11 crc kubenswrapper[4637]: E1201 15:25:11.772658 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:25:25 crc kubenswrapper[4637]: I1201 15:25:25.772504 4637 scope.go:117] "RemoveContainer" containerID="c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38" Dec 01 15:25:25 crc kubenswrapper[4637]: E1201 15:25:25.773803 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:25:39 crc kubenswrapper[4637]: I1201 15:25:39.781402 4637 scope.go:117] "RemoveContainer" containerID="c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38" Dec 01 15:25:39 crc kubenswrapper[4637]: E1201 15:25:39.782111 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:25:52 crc kubenswrapper[4637]: I1201 15:25:52.771924 4637 scope.go:117] "RemoveContainer" containerID="c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38" Dec 01 15:25:52 crc kubenswrapper[4637]: E1201 15:25:52.772670 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:26:05 crc kubenswrapper[4637]: I1201 15:26:05.771102 4637 scope.go:117] "RemoveContainer" containerID="c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38" Dec 01 15:26:05 crc kubenswrapper[4637]: E1201 15:26:05.771940 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:26:17 crc kubenswrapper[4637]: I1201 15:26:17.771837 4637 scope.go:117] "RemoveContainer" containerID="c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38" Dec 01 15:26:17 crc kubenswrapper[4637]: E1201 15:26:17.773430 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:26:29 crc kubenswrapper[4637]: I1201 15:26:29.781039 4637 scope.go:117] "RemoveContainer" containerID="c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38" Dec 01 15:26:29 crc kubenswrapper[4637]: E1201 15:26:29.781837 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:26:42 crc kubenswrapper[4637]: I1201 15:26:42.771745 4637 scope.go:117] "RemoveContainer" containerID="c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38" Dec 01 15:26:42 crc kubenswrapper[4637]: E1201 15:26:42.772636 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:26:57 crc kubenswrapper[4637]: I1201 15:26:57.771514 4637 scope.go:117] "RemoveContainer" containerID="c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38" Dec 01 15:26:57 crc kubenswrapper[4637]: E1201 15:26:57.772392 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:27:07 crc kubenswrapper[4637]: I1201 15:27:07.765538 4637 generic.go:334] "Generic (PLEG): container finished" podID="6a16b3a0-82a0-4cc6-820a-6c084408566f" containerID="0cea48d5970bcbfa5d54505868e6e205ef5fceee76f468bdc201387445b001b0" exitCode=0 Dec 01 15:27:07 crc kubenswrapper[4637]: I1201 15:27:07.765639 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp" event={"ID":"6a16b3a0-82a0-4cc6-820a-6c084408566f","Type":"ContainerDied","Data":"0cea48d5970bcbfa5d54505868e6e205ef5fceee76f468bdc201387445b001b0"} Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.190187 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.290052 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a16b3a0-82a0-4cc6-820a-6c084408566f-ssh-key\") pod \"6a16b3a0-82a0-4cc6-820a-6c084408566f\" (UID: \"6a16b3a0-82a0-4cc6-820a-6c084408566f\") " Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.290303 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w49bn\" (UniqueName: \"kubernetes.io/projected/6a16b3a0-82a0-4cc6-820a-6c084408566f-kube-api-access-w49bn\") pod \"6a16b3a0-82a0-4cc6-820a-6c084408566f\" (UID: \"6a16b3a0-82a0-4cc6-820a-6c084408566f\") " Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.290327 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a16b3a0-82a0-4cc6-820a-6c084408566f-libvirt-combined-ca-bundle\") pod \"6a16b3a0-82a0-4cc6-820a-6c084408566f\" (UID: \"6a16b3a0-82a0-4cc6-820a-6c084408566f\") " Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.290409 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a16b3a0-82a0-4cc6-820a-6c084408566f-inventory\") pod \"6a16b3a0-82a0-4cc6-820a-6c084408566f\" (UID: \"6a16b3a0-82a0-4cc6-820a-6c084408566f\") " Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.290474 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6a16b3a0-82a0-4cc6-820a-6c084408566f-libvirt-secret-0\") pod \"6a16b3a0-82a0-4cc6-820a-6c084408566f\" (UID: \"6a16b3a0-82a0-4cc6-820a-6c084408566f\") " Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.295480 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/6a16b3a0-82a0-4cc6-820a-6c084408566f-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "6a16b3a0-82a0-4cc6-820a-6c084408566f" (UID: "6a16b3a0-82a0-4cc6-820a-6c084408566f"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.296846 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a16b3a0-82a0-4cc6-820a-6c084408566f-kube-api-access-w49bn" (OuterVolumeSpecName: "kube-api-access-w49bn") pod "6a16b3a0-82a0-4cc6-820a-6c084408566f" (UID: "6a16b3a0-82a0-4cc6-820a-6c084408566f"). InnerVolumeSpecName "kube-api-access-w49bn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.326364 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a16b3a0-82a0-4cc6-820a-6c084408566f-inventory" (OuterVolumeSpecName: "inventory") pod "6a16b3a0-82a0-4cc6-820a-6c084408566f" (UID: "6a16b3a0-82a0-4cc6-820a-6c084408566f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.327106 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a16b3a0-82a0-4cc6-820a-6c084408566f-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "6a16b3a0-82a0-4cc6-820a-6c084408566f" (UID: "6a16b3a0-82a0-4cc6-820a-6c084408566f"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.328490 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a16b3a0-82a0-4cc6-820a-6c084408566f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6a16b3a0-82a0-4cc6-820a-6c084408566f" (UID: "6a16b3a0-82a0-4cc6-820a-6c084408566f"). 
InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.392674 4637 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a16b3a0-82a0-4cc6-820a-6c084408566f-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.392703 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w49bn\" (UniqueName: \"kubernetes.io/projected/6a16b3a0-82a0-4cc6-820a-6c084408566f-kube-api-access-w49bn\") on node \"crc\" DevicePath \"\"" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.392715 4637 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a16b3a0-82a0-4cc6-820a-6c084408566f-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.392724 4637 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a16b3a0-82a0-4cc6-820a-6c084408566f-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.392734 4637 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6a16b3a0-82a0-4cc6-820a-6c084408566f-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.781454 4637 scope.go:117] "RemoveContainer" containerID="c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38" Dec 01 15:27:09 crc kubenswrapper[4637]: E1201 15:27:09.781926 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.789624 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp" event={"ID":"6a16b3a0-82a0-4cc6-820a-6c084408566f","Type":"ContainerDied","Data":"1f1e1d56820073edd3dde5671a5d4dc1e2607fe0d1738ede705146c248de92e6"} Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.789668 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f1e1d56820073edd3dde5671a5d4dc1e2607fe0d1738ede705146c248de92e6" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.789731 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.894673 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5"] Dec 01 15:27:09 crc kubenswrapper[4637]: E1201 15:27:09.895156 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3670b285-f612-4054-8c54-92fe0c297d81" containerName="extract-content" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.895171 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="3670b285-f612-4054-8c54-92fe0c297d81" containerName="extract-content" Dec 01 15:27:09 crc kubenswrapper[4637]: E1201 15:27:09.895190 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a16b3a0-82a0-4cc6-820a-6c084408566f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.895200 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a16b3a0-82a0-4cc6-820a-6c084408566f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 01 15:27:09 crc kubenswrapper[4637]: E1201 15:27:09.895224 4637 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3670b285-f612-4054-8c54-92fe0c297d81" containerName="registry-server" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.895232 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="3670b285-f612-4054-8c54-92fe0c297d81" containerName="registry-server" Dec 01 15:27:09 crc kubenswrapper[4637]: E1201 15:27:09.895268 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3670b285-f612-4054-8c54-92fe0c297d81" containerName="extract-utilities" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.895277 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="3670b285-f612-4054-8c54-92fe0c297d81" containerName="extract-utilities" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.895481 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a16b3a0-82a0-4cc6-820a-6c084408566f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.895502 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="3670b285-f612-4054-8c54-92fe0c297d81" containerName="registry-server" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.896267 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.899112 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.899384 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.902590 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lt5wx" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.902843 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.902614 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.903113 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.910993 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/30d902b2-5e9c-4431-a436-03edbc23458d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-md4m5\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.911033 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-md4m5\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.911075 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gknmm\" (UniqueName: \"kubernetes.io/projected/30d902b2-5e9c-4431-a436-03edbc23458d-kube-api-access-gknmm\") pod \"nova-edpm-deployment-openstack-edpm-ipam-md4m5\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.911145 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-md4m5\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.911179 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-md4m5\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.911197 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-md4m5\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.911218 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-md4m5\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.911248 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-md4m5\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.911264 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-md4m5\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.919546 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 15:27:09 crc kubenswrapper[4637]: I1201 15:27:09.930531 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5"] Dec 01 15:27:10 crc kubenswrapper[4637]: I1201 15:27:10.011917 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-md4m5\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:27:10 crc kubenswrapper[4637]: I1201 15:27:10.012003 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-md4m5\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:27:10 crc kubenswrapper[4637]: I1201 15:27:10.012026 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-md4m5\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:27:10 crc kubenswrapper[4637]: I1201 15:27:10.012050 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-md4m5\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:27:10 crc kubenswrapper[4637]: I1201 15:27:10.012076 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-md4m5\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:27:10 crc kubenswrapper[4637]: I1201 15:27:10.012096 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-md4m5\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:27:10 crc kubenswrapper[4637]: I1201 15:27:10.012132 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/30d902b2-5e9c-4431-a436-03edbc23458d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-md4m5\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:27:10 crc kubenswrapper[4637]: I1201 15:27:10.012155 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-md4m5\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:27:10 crc kubenswrapper[4637]: I1201 15:27:10.012198 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gknmm\" (UniqueName: \"kubernetes.io/projected/30d902b2-5e9c-4431-a436-03edbc23458d-kube-api-access-gknmm\") pod \"nova-edpm-deployment-openstack-edpm-ipam-md4m5\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:27:10 crc kubenswrapper[4637]: I1201 15:27:10.013196 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/30d902b2-5e9c-4431-a436-03edbc23458d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-md4m5\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" 
Dec 01 15:27:10 crc kubenswrapper[4637]: I1201 15:27:10.021642 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-md4m5\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:27:10 crc kubenswrapper[4637]: I1201 15:27:10.021720 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-md4m5\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:27:10 crc kubenswrapper[4637]: I1201 15:27:10.023163 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-md4m5\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:27:10 crc kubenswrapper[4637]: I1201 15:27:10.024681 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-md4m5\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:27:10 crc kubenswrapper[4637]: I1201 15:27:10.026703 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-md4m5\" 
(UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:27:10 crc kubenswrapper[4637]: I1201 15:27:10.031529 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-md4m5\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:27:10 crc kubenswrapper[4637]: I1201 15:27:10.037762 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-md4m5\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:27:10 crc kubenswrapper[4637]: I1201 15:27:10.042666 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gknmm\" (UniqueName: \"kubernetes.io/projected/30d902b2-5e9c-4431-a436-03edbc23458d-kube-api-access-gknmm\") pod \"nova-edpm-deployment-openstack-edpm-ipam-md4m5\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:27:10 crc kubenswrapper[4637]: I1201 15:27:10.222139 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:27:10 crc kubenswrapper[4637]: I1201 15:27:10.779609 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5"] Dec 01 15:27:10 crc kubenswrapper[4637]: I1201 15:27:10.800752 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" event={"ID":"30d902b2-5e9c-4431-a436-03edbc23458d","Type":"ContainerStarted","Data":"a1a2a45c5ec79b9ac5b10e2e10fd5d1295c653b7a8303cd8ba02b30bc27f872a"} Dec 01 15:27:11 crc kubenswrapper[4637]: I1201 15:27:11.811777 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" event={"ID":"30d902b2-5e9c-4431-a436-03edbc23458d","Type":"ContainerStarted","Data":"830da093235ae2f9770ad6ab1b73eb5f1edcbe385047573db03e06d9175a0b30"} Dec 01 15:27:11 crc kubenswrapper[4637]: I1201 15:27:11.843000 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" podStartSLOduration=2.259893804 podStartE2EDuration="2.842974695s" podCreationTimestamp="2025-12-01 15:27:09 +0000 UTC" firstStartedPulling="2025-12-01 15:27:10.78528081 +0000 UTC m=+2481.302989638" lastFinishedPulling="2025-12-01 15:27:11.368361701 +0000 UTC m=+2481.886070529" observedRunningTime="2025-12-01 15:27:11.833739846 +0000 UTC m=+2482.351448724" watchObservedRunningTime="2025-12-01 15:27:11.842974695 +0000 UTC m=+2482.360683523" Dec 01 15:27:21 crc kubenswrapper[4637]: I1201 15:27:21.772314 4637 scope.go:117] "RemoveContainer" containerID="c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38" Dec 01 15:27:21 crc kubenswrapper[4637]: E1201 15:27:21.774025 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:27:33 crc kubenswrapper[4637]: I1201 15:27:33.771781 4637 scope.go:117] "RemoveContainer" containerID="c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38" Dec 01 15:27:33 crc kubenswrapper[4637]: E1201 15:27:33.772646 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:27:46 crc kubenswrapper[4637]: I1201 15:27:46.771589 4637 scope.go:117] "RemoveContainer" containerID="c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38" Dec 01 15:27:46 crc kubenswrapper[4637]: E1201 15:27:46.772240 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:27:51 crc kubenswrapper[4637]: I1201 15:27:51.198060 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bx5xm"] Dec 01 15:27:51 crc kubenswrapper[4637]: I1201 15:27:51.206473 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bx5xm" Dec 01 15:27:51 crc kubenswrapper[4637]: I1201 15:27:51.216783 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bx5xm"] Dec 01 15:27:51 crc kubenswrapper[4637]: I1201 15:27:51.314453 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b335e55e-7c47-4930-8f05-831d7cd4f8b9-catalog-content\") pod \"community-operators-bx5xm\" (UID: \"b335e55e-7c47-4930-8f05-831d7cd4f8b9\") " pod="openshift-marketplace/community-operators-bx5xm" Dec 01 15:27:51 crc kubenswrapper[4637]: I1201 15:27:51.314882 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvvvk\" (UniqueName: \"kubernetes.io/projected/b335e55e-7c47-4930-8f05-831d7cd4f8b9-kube-api-access-gvvvk\") pod \"community-operators-bx5xm\" (UID: \"b335e55e-7c47-4930-8f05-831d7cd4f8b9\") " pod="openshift-marketplace/community-operators-bx5xm" Dec 01 15:27:51 crc kubenswrapper[4637]: I1201 15:27:51.315103 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b335e55e-7c47-4930-8f05-831d7cd4f8b9-utilities\") pod \"community-operators-bx5xm\" (UID: \"b335e55e-7c47-4930-8f05-831d7cd4f8b9\") " pod="openshift-marketplace/community-operators-bx5xm" Dec 01 15:27:51 crc kubenswrapper[4637]: I1201 15:27:51.394459 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m75jt"] Dec 01 15:27:51 crc kubenswrapper[4637]: I1201 15:27:51.397179 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m75jt" Dec 01 15:27:51 crc kubenswrapper[4637]: I1201 15:27:51.408769 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m75jt"] Dec 01 15:27:51 crc kubenswrapper[4637]: I1201 15:27:51.417079 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvvvk\" (UniqueName: \"kubernetes.io/projected/b335e55e-7c47-4930-8f05-831d7cd4f8b9-kube-api-access-gvvvk\") pod \"community-operators-bx5xm\" (UID: \"b335e55e-7c47-4930-8f05-831d7cd4f8b9\") " pod="openshift-marketplace/community-operators-bx5xm" Dec 01 15:27:51 crc kubenswrapper[4637]: I1201 15:27:51.417386 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b335e55e-7c47-4930-8f05-831d7cd4f8b9-utilities\") pod \"community-operators-bx5xm\" (UID: \"b335e55e-7c47-4930-8f05-831d7cd4f8b9\") " pod="openshift-marketplace/community-operators-bx5xm" Dec 01 15:27:51 crc kubenswrapper[4637]: I1201 15:27:51.417502 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b335e55e-7c47-4930-8f05-831d7cd4f8b9-catalog-content\") pod \"community-operators-bx5xm\" (UID: \"b335e55e-7c47-4930-8f05-831d7cd4f8b9\") " pod="openshift-marketplace/community-operators-bx5xm" Dec 01 15:27:51 crc kubenswrapper[4637]: I1201 15:27:51.418082 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b335e55e-7c47-4930-8f05-831d7cd4f8b9-catalog-content\") pod \"community-operators-bx5xm\" (UID: \"b335e55e-7c47-4930-8f05-831d7cd4f8b9\") " pod="openshift-marketplace/community-operators-bx5xm" Dec 01 15:27:51 crc kubenswrapper[4637]: I1201 15:27:51.418151 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b335e55e-7c47-4930-8f05-831d7cd4f8b9-utilities\") pod \"community-operators-bx5xm\" (UID: \"b335e55e-7c47-4930-8f05-831d7cd4f8b9\") " pod="openshift-marketplace/community-operators-bx5xm" Dec 01 15:27:51 crc kubenswrapper[4637]: I1201 15:27:51.448780 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvvvk\" (UniqueName: \"kubernetes.io/projected/b335e55e-7c47-4930-8f05-831d7cd4f8b9-kube-api-access-gvvvk\") pod \"community-operators-bx5xm\" (UID: \"b335e55e-7c47-4930-8f05-831d7cd4f8b9\") " pod="openshift-marketplace/community-operators-bx5xm" Dec 01 15:27:51 crc kubenswrapper[4637]: I1201 15:27:51.518722 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fps9x\" (UniqueName: \"kubernetes.io/projected/0af51b78-9c76-4662-9f19-3dc1aced3c8f-kube-api-access-fps9x\") pod \"certified-operators-m75jt\" (UID: \"0af51b78-9c76-4662-9f19-3dc1aced3c8f\") " pod="openshift-marketplace/certified-operators-m75jt" Dec 01 15:27:51 crc kubenswrapper[4637]: I1201 15:27:51.519426 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0af51b78-9c76-4662-9f19-3dc1aced3c8f-utilities\") pod \"certified-operators-m75jt\" (UID: \"0af51b78-9c76-4662-9f19-3dc1aced3c8f\") " pod="openshift-marketplace/certified-operators-m75jt" Dec 01 15:27:51 crc kubenswrapper[4637]: I1201 15:27:51.519700 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0af51b78-9c76-4662-9f19-3dc1aced3c8f-catalog-content\") pod \"certified-operators-m75jt\" (UID: \"0af51b78-9c76-4662-9f19-3dc1aced3c8f\") " pod="openshift-marketplace/certified-operators-m75jt" Dec 01 15:27:51 crc kubenswrapper[4637]: I1201 15:27:51.525468 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bx5xm" Dec 01 15:27:51 crc kubenswrapper[4637]: I1201 15:27:51.621717 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0af51b78-9c76-4662-9f19-3dc1aced3c8f-utilities\") pod \"certified-operators-m75jt\" (UID: \"0af51b78-9c76-4662-9f19-3dc1aced3c8f\") " pod="openshift-marketplace/certified-operators-m75jt" Dec 01 15:27:51 crc kubenswrapper[4637]: I1201 15:27:51.621771 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0af51b78-9c76-4662-9f19-3dc1aced3c8f-catalog-content\") pod \"certified-operators-m75jt\" (UID: \"0af51b78-9c76-4662-9f19-3dc1aced3c8f\") " pod="openshift-marketplace/certified-operators-m75jt" Dec 01 15:27:51 crc kubenswrapper[4637]: I1201 15:27:51.621886 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fps9x\" (UniqueName: \"kubernetes.io/projected/0af51b78-9c76-4662-9f19-3dc1aced3c8f-kube-api-access-fps9x\") pod \"certified-operators-m75jt\" (UID: \"0af51b78-9c76-4662-9f19-3dc1aced3c8f\") " pod="openshift-marketplace/certified-operators-m75jt" Dec 01 15:27:51 crc kubenswrapper[4637]: I1201 15:27:51.623054 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0af51b78-9c76-4662-9f19-3dc1aced3c8f-utilities\") pod \"certified-operators-m75jt\" (UID: \"0af51b78-9c76-4662-9f19-3dc1aced3c8f\") " pod="openshift-marketplace/certified-operators-m75jt" Dec 01 15:27:51 crc kubenswrapper[4637]: I1201 15:27:51.623366 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0af51b78-9c76-4662-9f19-3dc1aced3c8f-catalog-content\") pod \"certified-operators-m75jt\" (UID: \"0af51b78-9c76-4662-9f19-3dc1aced3c8f\") " 
pod="openshift-marketplace/certified-operators-m75jt" Dec 01 15:27:51 crc kubenswrapper[4637]: I1201 15:27:51.671248 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fps9x\" (UniqueName: \"kubernetes.io/projected/0af51b78-9c76-4662-9f19-3dc1aced3c8f-kube-api-access-fps9x\") pod \"certified-operators-m75jt\" (UID: \"0af51b78-9c76-4662-9f19-3dc1aced3c8f\") " pod="openshift-marketplace/certified-operators-m75jt" Dec 01 15:27:51 crc kubenswrapper[4637]: I1201 15:27:51.716396 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m75jt" Dec 01 15:27:52 crc kubenswrapper[4637]: I1201 15:27:52.234211 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bx5xm"] Dec 01 15:27:52 crc kubenswrapper[4637]: I1201 15:27:52.280888 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m75jt"] Dec 01 15:27:52 crc kubenswrapper[4637]: W1201 15:27:52.351185 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0af51b78_9c76_4662_9f19_3dc1aced3c8f.slice/crio-398c65cf3beb3273f3fdc7e508d621c5645a6f887077313524e61b6111bdc15c WatchSource:0}: Error finding container 398c65cf3beb3273f3fdc7e508d621c5645a6f887077313524e61b6111bdc15c: Status 404 returned error can't find the container with id 398c65cf3beb3273f3fdc7e508d621c5645a6f887077313524e61b6111bdc15c Dec 01 15:27:53 crc kubenswrapper[4637]: I1201 15:27:53.241245 4637 generic.go:334] "Generic (PLEG): container finished" podID="b335e55e-7c47-4930-8f05-831d7cd4f8b9" containerID="eb3fcb267a755965beb65f356b15bd101ae6212c7c67520b2d7f6ffff90c8a72" exitCode=0 Dec 01 15:27:53 crc kubenswrapper[4637]: I1201 15:27:53.241338 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bx5xm" 
event={"ID":"b335e55e-7c47-4930-8f05-831d7cd4f8b9","Type":"ContainerDied","Data":"eb3fcb267a755965beb65f356b15bd101ae6212c7c67520b2d7f6ffff90c8a72"} Dec 01 15:27:53 crc kubenswrapper[4637]: I1201 15:27:53.241625 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bx5xm" event={"ID":"b335e55e-7c47-4930-8f05-831d7cd4f8b9","Type":"ContainerStarted","Data":"757de1a21f5df22f66ca27cdacf5fda840e156ef2712edf0bc0c915f6b3c3096"} Dec 01 15:27:53 crc kubenswrapper[4637]: I1201 15:27:53.245477 4637 generic.go:334] "Generic (PLEG): container finished" podID="0af51b78-9c76-4662-9f19-3dc1aced3c8f" containerID="fee82de48aac9d49050f7dad9da7ec264d8e7d63ba6de5e1eac69b6cbeda48be" exitCode=0 Dec 01 15:27:53 crc kubenswrapper[4637]: I1201 15:27:53.245516 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m75jt" event={"ID":"0af51b78-9c76-4662-9f19-3dc1aced3c8f","Type":"ContainerDied","Data":"fee82de48aac9d49050f7dad9da7ec264d8e7d63ba6de5e1eac69b6cbeda48be"} Dec 01 15:27:53 crc kubenswrapper[4637]: I1201 15:27:53.245540 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m75jt" event={"ID":"0af51b78-9c76-4662-9f19-3dc1aced3c8f","Type":"ContainerStarted","Data":"398c65cf3beb3273f3fdc7e508d621c5645a6f887077313524e61b6111bdc15c"} Dec 01 15:27:53 crc kubenswrapper[4637]: I1201 15:27:53.596120 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q4zlh"] Dec 01 15:27:53 crc kubenswrapper[4637]: I1201 15:27:53.598975 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q4zlh" Dec 01 15:27:53 crc kubenswrapper[4637]: I1201 15:27:53.628672 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4zlh"] Dec 01 15:27:53 crc kubenswrapper[4637]: I1201 15:27:53.677380 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pd78\" (UniqueName: \"kubernetes.io/projected/59829fcd-70b9-4fbc-97da-da3b148700ae-kube-api-access-8pd78\") pod \"redhat-marketplace-q4zlh\" (UID: \"59829fcd-70b9-4fbc-97da-da3b148700ae\") " pod="openshift-marketplace/redhat-marketplace-q4zlh" Dec 01 15:27:53 crc kubenswrapper[4637]: I1201 15:27:53.677631 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59829fcd-70b9-4fbc-97da-da3b148700ae-utilities\") pod \"redhat-marketplace-q4zlh\" (UID: \"59829fcd-70b9-4fbc-97da-da3b148700ae\") " pod="openshift-marketplace/redhat-marketplace-q4zlh" Dec 01 15:27:53 crc kubenswrapper[4637]: I1201 15:27:53.677776 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59829fcd-70b9-4fbc-97da-da3b148700ae-catalog-content\") pod \"redhat-marketplace-q4zlh\" (UID: \"59829fcd-70b9-4fbc-97da-da3b148700ae\") " pod="openshift-marketplace/redhat-marketplace-q4zlh" Dec 01 15:27:53 crc kubenswrapper[4637]: I1201 15:27:53.782251 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59829fcd-70b9-4fbc-97da-da3b148700ae-utilities\") pod \"redhat-marketplace-q4zlh\" (UID: \"59829fcd-70b9-4fbc-97da-da3b148700ae\") " pod="openshift-marketplace/redhat-marketplace-q4zlh" Dec 01 15:27:53 crc kubenswrapper[4637]: I1201 15:27:53.782303 4637 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59829fcd-70b9-4fbc-97da-da3b148700ae-catalog-content\") pod \"redhat-marketplace-q4zlh\" (UID: \"59829fcd-70b9-4fbc-97da-da3b148700ae\") " pod="openshift-marketplace/redhat-marketplace-q4zlh" Dec 01 15:27:53 crc kubenswrapper[4637]: I1201 15:27:53.782403 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pd78\" (UniqueName: \"kubernetes.io/projected/59829fcd-70b9-4fbc-97da-da3b148700ae-kube-api-access-8pd78\") pod \"redhat-marketplace-q4zlh\" (UID: \"59829fcd-70b9-4fbc-97da-da3b148700ae\") " pod="openshift-marketplace/redhat-marketplace-q4zlh" Dec 01 15:27:53 crc kubenswrapper[4637]: I1201 15:27:53.783006 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59829fcd-70b9-4fbc-97da-da3b148700ae-utilities\") pod \"redhat-marketplace-q4zlh\" (UID: \"59829fcd-70b9-4fbc-97da-da3b148700ae\") " pod="openshift-marketplace/redhat-marketplace-q4zlh" Dec 01 15:27:53 crc kubenswrapper[4637]: I1201 15:27:53.783031 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59829fcd-70b9-4fbc-97da-da3b148700ae-catalog-content\") pod \"redhat-marketplace-q4zlh\" (UID: \"59829fcd-70b9-4fbc-97da-da3b148700ae\") " pod="openshift-marketplace/redhat-marketplace-q4zlh" Dec 01 15:27:53 crc kubenswrapper[4637]: I1201 15:27:53.806130 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pd78\" (UniqueName: \"kubernetes.io/projected/59829fcd-70b9-4fbc-97da-da3b148700ae-kube-api-access-8pd78\") pod \"redhat-marketplace-q4zlh\" (UID: \"59829fcd-70b9-4fbc-97da-da3b148700ae\") " pod="openshift-marketplace/redhat-marketplace-q4zlh" Dec 01 15:27:53 crc kubenswrapper[4637]: I1201 15:27:53.923985 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q4zlh" Dec 01 15:27:54 crc kubenswrapper[4637]: I1201 15:27:54.454793 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4zlh"] Dec 01 15:27:54 crc kubenswrapper[4637]: W1201 15:27:54.461067 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59829fcd_70b9_4fbc_97da_da3b148700ae.slice/crio-e0be6fdcdae835cd75254ec890a31ffef69a9dc64eb514a13300412022699e06 WatchSource:0}: Error finding container e0be6fdcdae835cd75254ec890a31ffef69a9dc64eb514a13300412022699e06: Status 404 returned error can't find the container with id e0be6fdcdae835cd75254ec890a31ffef69a9dc64eb514a13300412022699e06 Dec 01 15:27:55 crc kubenswrapper[4637]: I1201 15:27:55.274799 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bx5xm" event={"ID":"b335e55e-7c47-4930-8f05-831d7cd4f8b9","Type":"ContainerStarted","Data":"41e44c0b2cfb267dde7e7dcc22214591c7a27f07d509d6c7fb6da4d5c08c3127"} Dec 01 15:27:55 crc kubenswrapper[4637]: I1201 15:27:55.288650 4637 generic.go:334] "Generic (PLEG): container finished" podID="59829fcd-70b9-4fbc-97da-da3b148700ae" containerID="4b17d2f3c7b5abc9088ce5825ca55c3b8692b535e399129a3cfdc633a70bc8b6" exitCode=0 Dec 01 15:27:55 crc kubenswrapper[4637]: I1201 15:27:55.288715 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4zlh" event={"ID":"59829fcd-70b9-4fbc-97da-da3b148700ae","Type":"ContainerDied","Data":"4b17d2f3c7b5abc9088ce5825ca55c3b8692b535e399129a3cfdc633a70bc8b6"} Dec 01 15:27:55 crc kubenswrapper[4637]: I1201 15:27:55.288774 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4zlh" 
event={"ID":"59829fcd-70b9-4fbc-97da-da3b148700ae","Type":"ContainerStarted","Data":"e0be6fdcdae835cd75254ec890a31ffef69a9dc64eb514a13300412022699e06"} Dec 01 15:27:55 crc kubenswrapper[4637]: I1201 15:27:55.297717 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m75jt" event={"ID":"0af51b78-9c76-4662-9f19-3dc1aced3c8f","Type":"ContainerStarted","Data":"1617709281fe472bc4c8b54b84ab5c26d11f0f80369f57c29ba6398cbd019988"} Dec 01 15:27:56 crc kubenswrapper[4637]: I1201 15:27:56.308781 4637 generic.go:334] "Generic (PLEG): container finished" podID="0af51b78-9c76-4662-9f19-3dc1aced3c8f" containerID="1617709281fe472bc4c8b54b84ab5c26d11f0f80369f57c29ba6398cbd019988" exitCode=0 Dec 01 15:27:56 crc kubenswrapper[4637]: I1201 15:27:56.309036 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m75jt" event={"ID":"0af51b78-9c76-4662-9f19-3dc1aced3c8f","Type":"ContainerDied","Data":"1617709281fe472bc4c8b54b84ab5c26d11f0f80369f57c29ba6398cbd019988"} Dec 01 15:27:56 crc kubenswrapper[4637]: I1201 15:27:56.313164 4637 generic.go:334] "Generic (PLEG): container finished" podID="b335e55e-7c47-4930-8f05-831d7cd4f8b9" containerID="41e44c0b2cfb267dde7e7dcc22214591c7a27f07d509d6c7fb6da4d5c08c3127" exitCode=0 Dec 01 15:27:56 crc kubenswrapper[4637]: I1201 15:27:56.313274 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bx5xm" event={"ID":"b335e55e-7c47-4930-8f05-831d7cd4f8b9","Type":"ContainerDied","Data":"41e44c0b2cfb267dde7e7dcc22214591c7a27f07d509d6c7fb6da4d5c08c3127"} Dec 01 15:27:57 crc kubenswrapper[4637]: I1201 15:27:57.325908 4637 generic.go:334] "Generic (PLEG): container finished" podID="59829fcd-70b9-4fbc-97da-da3b148700ae" containerID="5de2c2d42df2545156775d5ee6d85377fa12e98c8bc2e05ac1f2478344cd065a" exitCode=0 Dec 01 15:27:57 crc kubenswrapper[4637]: I1201 15:27:57.325962 4637 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-q4zlh" event={"ID":"59829fcd-70b9-4fbc-97da-da3b148700ae","Type":"ContainerDied","Data":"5de2c2d42df2545156775d5ee6d85377fa12e98c8bc2e05ac1f2478344cd065a"} Dec 01 15:27:57 crc kubenswrapper[4637]: I1201 15:27:57.347666 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m75jt" event={"ID":"0af51b78-9c76-4662-9f19-3dc1aced3c8f","Type":"ContainerStarted","Data":"6536b07a67a149bd596b79781a9f09d33d78a0ab7ed85df65f2165e8280cc1e4"} Dec 01 15:27:57 crc kubenswrapper[4637]: I1201 15:27:57.376853 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m75jt" podStartSLOduration=2.628098823 podStartE2EDuration="6.376836592s" podCreationTimestamp="2025-12-01 15:27:51 +0000 UTC" firstStartedPulling="2025-12-01 15:27:53.246733078 +0000 UTC m=+2523.764441906" lastFinishedPulling="2025-12-01 15:27:56.995470847 +0000 UTC m=+2527.513179675" observedRunningTime="2025-12-01 15:27:57.370880562 +0000 UTC m=+2527.888589390" watchObservedRunningTime="2025-12-01 15:27:57.376836592 +0000 UTC m=+2527.894545420" Dec 01 15:27:57 crc kubenswrapper[4637]: I1201 15:27:57.772080 4637 scope.go:117] "RemoveContainer" containerID="c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38" Dec 01 15:27:57 crc kubenswrapper[4637]: E1201 15:27:57.772522 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:27:58 crc kubenswrapper[4637]: I1201 15:27:58.357392 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-bx5xm" event={"ID":"b335e55e-7c47-4930-8f05-831d7cd4f8b9","Type":"ContainerStarted","Data":"ba1e9bbec530846541425b891b08fbf32271724da4953e1059d3b2f299315c53"} Dec 01 15:27:58 crc kubenswrapper[4637]: I1201 15:27:58.359954 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4zlh" event={"ID":"59829fcd-70b9-4fbc-97da-da3b148700ae","Type":"ContainerStarted","Data":"738fdd22cb4f7159505d0e1879d2abb466818ec12e0a64f934891472a16c0b5a"} Dec 01 15:27:58 crc kubenswrapper[4637]: I1201 15:27:58.380175 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bx5xm" podStartSLOduration=2.705542435 podStartE2EDuration="7.38015949s" podCreationTimestamp="2025-12-01 15:27:51 +0000 UTC" firstStartedPulling="2025-12-01 15:27:53.245063983 +0000 UTC m=+2523.762772811" lastFinishedPulling="2025-12-01 15:27:57.919681038 +0000 UTC m=+2528.437389866" observedRunningTime="2025-12-01 15:27:58.374672732 +0000 UTC m=+2528.892381570" watchObservedRunningTime="2025-12-01 15:27:58.38015949 +0000 UTC m=+2528.897868318" Dec 01 15:27:58 crc kubenswrapper[4637]: I1201 15:27:58.412248 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q4zlh" podStartSLOduration=2.87332184 podStartE2EDuration="5.412226456s" podCreationTimestamp="2025-12-01 15:27:53 +0000 UTC" firstStartedPulling="2025-12-01 15:27:55.291210124 +0000 UTC m=+2525.808918952" lastFinishedPulling="2025-12-01 15:27:57.83011474 +0000 UTC m=+2528.347823568" observedRunningTime="2025-12-01 15:27:58.405088773 +0000 UTC m=+2528.922797601" watchObservedRunningTime="2025-12-01 15:27:58.412226456 +0000 UTC m=+2528.929935284" Dec 01 15:28:01 crc kubenswrapper[4637]: I1201 15:28:01.525833 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bx5xm" Dec 01 15:28:01 crc 
kubenswrapper[4637]: I1201 15:28:01.527077 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bx5xm" Dec 01 15:28:01 crc kubenswrapper[4637]: I1201 15:28:01.720247 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m75jt" Dec 01 15:28:01 crc kubenswrapper[4637]: I1201 15:28:01.720300 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m75jt" Dec 01 15:28:01 crc kubenswrapper[4637]: I1201 15:28:01.770499 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m75jt" Dec 01 15:28:02 crc kubenswrapper[4637]: I1201 15:28:02.443996 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m75jt" Dec 01 15:28:02 crc kubenswrapper[4637]: I1201 15:28:02.596114 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-bx5xm" podUID="b335e55e-7c47-4930-8f05-831d7cd4f8b9" containerName="registry-server" probeResult="failure" output=< Dec 01 15:28:02 crc kubenswrapper[4637]: timeout: failed to connect service ":50051" within 1s Dec 01 15:28:02 crc kubenswrapper[4637]: > Dec 01 15:28:03 crc kubenswrapper[4637]: I1201 15:28:03.192496 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m75jt"] Dec 01 15:28:03 crc kubenswrapper[4637]: I1201 15:28:03.925609 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q4zlh" Dec 01 15:28:03 crc kubenswrapper[4637]: I1201 15:28:03.926497 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q4zlh" Dec 01 15:28:04 crc kubenswrapper[4637]: I1201 15:28:04.029119 4637 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q4zlh" Dec 01 15:28:04 crc kubenswrapper[4637]: I1201 15:28:04.415355 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m75jt" podUID="0af51b78-9c76-4662-9f19-3dc1aced3c8f" containerName="registry-server" containerID="cri-o://6536b07a67a149bd596b79781a9f09d33d78a0ab7ed85df65f2165e8280cc1e4" gracePeriod=2 Dec 01 15:28:04 crc kubenswrapper[4637]: I1201 15:28:04.467450 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q4zlh" Dec 01 15:28:04 crc kubenswrapper[4637]: I1201 15:28:04.858214 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m75jt" Dec 01 15:28:04 crc kubenswrapper[4637]: I1201 15:28:04.938917 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fps9x\" (UniqueName: \"kubernetes.io/projected/0af51b78-9c76-4662-9f19-3dc1aced3c8f-kube-api-access-fps9x\") pod \"0af51b78-9c76-4662-9f19-3dc1aced3c8f\" (UID: \"0af51b78-9c76-4662-9f19-3dc1aced3c8f\") " Dec 01 15:28:04 crc kubenswrapper[4637]: I1201 15:28:04.939013 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0af51b78-9c76-4662-9f19-3dc1aced3c8f-catalog-content\") pod \"0af51b78-9c76-4662-9f19-3dc1aced3c8f\" (UID: \"0af51b78-9c76-4662-9f19-3dc1aced3c8f\") " Dec 01 15:28:04 crc kubenswrapper[4637]: I1201 15:28:04.939111 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0af51b78-9c76-4662-9f19-3dc1aced3c8f-utilities\") pod \"0af51b78-9c76-4662-9f19-3dc1aced3c8f\" (UID: \"0af51b78-9c76-4662-9f19-3dc1aced3c8f\") " Dec 01 15:28:04 crc kubenswrapper[4637]: I1201 15:28:04.939780 4637 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0af51b78-9c76-4662-9f19-3dc1aced3c8f-utilities" (OuterVolumeSpecName: "utilities") pod "0af51b78-9c76-4662-9f19-3dc1aced3c8f" (UID: "0af51b78-9c76-4662-9f19-3dc1aced3c8f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:28:04 crc kubenswrapper[4637]: I1201 15:28:04.949431 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0af51b78-9c76-4662-9f19-3dc1aced3c8f-kube-api-access-fps9x" (OuterVolumeSpecName: "kube-api-access-fps9x") pod "0af51b78-9c76-4662-9f19-3dc1aced3c8f" (UID: "0af51b78-9c76-4662-9f19-3dc1aced3c8f"). InnerVolumeSpecName "kube-api-access-fps9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:28:04 crc kubenswrapper[4637]: I1201 15:28:04.991200 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0af51b78-9c76-4662-9f19-3dc1aced3c8f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0af51b78-9c76-4662-9f19-3dc1aced3c8f" (UID: "0af51b78-9c76-4662-9f19-3dc1aced3c8f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:28:05 crc kubenswrapper[4637]: I1201 15:28:05.041295 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0af51b78-9c76-4662-9f19-3dc1aced3c8f-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 15:28:05 crc kubenswrapper[4637]: I1201 15:28:05.041335 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fps9x\" (UniqueName: \"kubernetes.io/projected/0af51b78-9c76-4662-9f19-3dc1aced3c8f-kube-api-access-fps9x\") on node \"crc\" DevicePath \"\"" Dec 01 15:28:05 crc kubenswrapper[4637]: I1201 15:28:05.041348 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0af51b78-9c76-4662-9f19-3dc1aced3c8f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 15:28:05 crc kubenswrapper[4637]: I1201 15:28:05.424750 4637 generic.go:334] "Generic (PLEG): container finished" podID="0af51b78-9c76-4662-9f19-3dc1aced3c8f" containerID="6536b07a67a149bd596b79781a9f09d33d78a0ab7ed85df65f2165e8280cc1e4" exitCode=0 Dec 01 15:28:05 crc kubenswrapper[4637]: I1201 15:28:05.424784 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m75jt" event={"ID":"0af51b78-9c76-4662-9f19-3dc1aced3c8f","Type":"ContainerDied","Data":"6536b07a67a149bd596b79781a9f09d33d78a0ab7ed85df65f2165e8280cc1e4"} Dec 01 15:28:05 crc kubenswrapper[4637]: I1201 15:28:05.424829 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m75jt" event={"ID":"0af51b78-9c76-4662-9f19-3dc1aced3c8f","Type":"ContainerDied","Data":"398c65cf3beb3273f3fdc7e508d621c5645a6f887077313524e61b6111bdc15c"} Dec 01 15:28:05 crc kubenswrapper[4637]: I1201 15:28:05.424848 4637 scope.go:117] "RemoveContainer" containerID="6536b07a67a149bd596b79781a9f09d33d78a0ab7ed85df65f2165e8280cc1e4" Dec 01 15:28:05 crc kubenswrapper[4637]: I1201 
15:28:05.424799 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m75jt" Dec 01 15:28:05 crc kubenswrapper[4637]: I1201 15:28:05.460632 4637 scope.go:117] "RemoveContainer" containerID="1617709281fe472bc4c8b54b84ab5c26d11f0f80369f57c29ba6398cbd019988" Dec 01 15:28:05 crc kubenswrapper[4637]: I1201 15:28:05.467043 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m75jt"] Dec 01 15:28:05 crc kubenswrapper[4637]: I1201 15:28:05.483924 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m75jt"] Dec 01 15:28:05 crc kubenswrapper[4637]: I1201 15:28:05.485466 4637 scope.go:117] "RemoveContainer" containerID="fee82de48aac9d49050f7dad9da7ec264d8e7d63ba6de5e1eac69b6cbeda48be" Dec 01 15:28:05 crc kubenswrapper[4637]: I1201 15:28:05.529570 4637 scope.go:117] "RemoveContainer" containerID="6536b07a67a149bd596b79781a9f09d33d78a0ab7ed85df65f2165e8280cc1e4" Dec 01 15:28:05 crc kubenswrapper[4637]: E1201 15:28:05.530164 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6536b07a67a149bd596b79781a9f09d33d78a0ab7ed85df65f2165e8280cc1e4\": container with ID starting with 6536b07a67a149bd596b79781a9f09d33d78a0ab7ed85df65f2165e8280cc1e4 not found: ID does not exist" containerID="6536b07a67a149bd596b79781a9f09d33d78a0ab7ed85df65f2165e8280cc1e4" Dec 01 15:28:05 crc kubenswrapper[4637]: I1201 15:28:05.530217 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6536b07a67a149bd596b79781a9f09d33d78a0ab7ed85df65f2165e8280cc1e4"} err="failed to get container status \"6536b07a67a149bd596b79781a9f09d33d78a0ab7ed85df65f2165e8280cc1e4\": rpc error: code = NotFound desc = could not find container \"6536b07a67a149bd596b79781a9f09d33d78a0ab7ed85df65f2165e8280cc1e4\": container with ID starting with 
6536b07a67a149bd596b79781a9f09d33d78a0ab7ed85df65f2165e8280cc1e4 not found: ID does not exist" Dec 01 15:28:05 crc kubenswrapper[4637]: I1201 15:28:05.530252 4637 scope.go:117] "RemoveContainer" containerID="1617709281fe472bc4c8b54b84ab5c26d11f0f80369f57c29ba6398cbd019988" Dec 01 15:28:05 crc kubenswrapper[4637]: E1201 15:28:05.530548 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1617709281fe472bc4c8b54b84ab5c26d11f0f80369f57c29ba6398cbd019988\": container with ID starting with 1617709281fe472bc4c8b54b84ab5c26d11f0f80369f57c29ba6398cbd019988 not found: ID does not exist" containerID="1617709281fe472bc4c8b54b84ab5c26d11f0f80369f57c29ba6398cbd019988" Dec 01 15:28:05 crc kubenswrapper[4637]: I1201 15:28:05.530579 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1617709281fe472bc4c8b54b84ab5c26d11f0f80369f57c29ba6398cbd019988"} err="failed to get container status \"1617709281fe472bc4c8b54b84ab5c26d11f0f80369f57c29ba6398cbd019988\": rpc error: code = NotFound desc = could not find container \"1617709281fe472bc4c8b54b84ab5c26d11f0f80369f57c29ba6398cbd019988\": container with ID starting with 1617709281fe472bc4c8b54b84ab5c26d11f0f80369f57c29ba6398cbd019988 not found: ID does not exist" Dec 01 15:28:05 crc kubenswrapper[4637]: I1201 15:28:05.530601 4637 scope.go:117] "RemoveContainer" containerID="fee82de48aac9d49050f7dad9da7ec264d8e7d63ba6de5e1eac69b6cbeda48be" Dec 01 15:28:05 crc kubenswrapper[4637]: E1201 15:28:05.530832 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fee82de48aac9d49050f7dad9da7ec264d8e7d63ba6de5e1eac69b6cbeda48be\": container with ID starting with fee82de48aac9d49050f7dad9da7ec264d8e7d63ba6de5e1eac69b6cbeda48be not found: ID does not exist" containerID="fee82de48aac9d49050f7dad9da7ec264d8e7d63ba6de5e1eac69b6cbeda48be" Dec 01 15:28:05 crc 
kubenswrapper[4637]: I1201 15:28:05.530874 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fee82de48aac9d49050f7dad9da7ec264d8e7d63ba6de5e1eac69b6cbeda48be"} err="failed to get container status \"fee82de48aac9d49050f7dad9da7ec264d8e7d63ba6de5e1eac69b6cbeda48be\": rpc error: code = NotFound desc = could not find container \"fee82de48aac9d49050f7dad9da7ec264d8e7d63ba6de5e1eac69b6cbeda48be\": container with ID starting with fee82de48aac9d49050f7dad9da7ec264d8e7d63ba6de5e1eac69b6cbeda48be not found: ID does not exist" Dec 01 15:28:05 crc kubenswrapper[4637]: I1201 15:28:05.783756 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0af51b78-9c76-4662-9f19-3dc1aced3c8f" path="/var/lib/kubelet/pods/0af51b78-9c76-4662-9f19-3dc1aced3c8f/volumes" Dec 01 15:28:06 crc kubenswrapper[4637]: I1201 15:28:06.388561 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4zlh"] Dec 01 15:28:06 crc kubenswrapper[4637]: I1201 15:28:06.442495 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q4zlh" podUID="59829fcd-70b9-4fbc-97da-da3b148700ae" containerName="registry-server" containerID="cri-o://738fdd22cb4f7159505d0e1879d2abb466818ec12e0a64f934891472a16c0b5a" gracePeriod=2 Dec 01 15:28:06 crc kubenswrapper[4637]: E1201 15:28:06.635143 4637 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59829fcd_70b9_4fbc_97da_da3b148700ae.slice/crio-738fdd22cb4f7159505d0e1879d2abb466818ec12e0a64f934891472a16c0b5a.scope\": RecentStats: unable to find data in memory cache]" Dec 01 15:28:06 crc kubenswrapper[4637]: I1201 15:28:06.910090 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q4zlh" Dec 01 15:28:06 crc kubenswrapper[4637]: I1201 15:28:06.998604 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59829fcd-70b9-4fbc-97da-da3b148700ae-utilities\") pod \"59829fcd-70b9-4fbc-97da-da3b148700ae\" (UID: \"59829fcd-70b9-4fbc-97da-da3b148700ae\") " Dec 01 15:28:06 crc kubenswrapper[4637]: I1201 15:28:06.998805 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59829fcd-70b9-4fbc-97da-da3b148700ae-catalog-content\") pod \"59829fcd-70b9-4fbc-97da-da3b148700ae\" (UID: \"59829fcd-70b9-4fbc-97da-da3b148700ae\") " Dec 01 15:28:06 crc kubenswrapper[4637]: I1201 15:28:06.998973 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pd78\" (UniqueName: \"kubernetes.io/projected/59829fcd-70b9-4fbc-97da-da3b148700ae-kube-api-access-8pd78\") pod \"59829fcd-70b9-4fbc-97da-da3b148700ae\" (UID: \"59829fcd-70b9-4fbc-97da-da3b148700ae\") " Dec 01 15:28:06 crc kubenswrapper[4637]: I1201 15:28:06.999537 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59829fcd-70b9-4fbc-97da-da3b148700ae-utilities" (OuterVolumeSpecName: "utilities") pod "59829fcd-70b9-4fbc-97da-da3b148700ae" (UID: "59829fcd-70b9-4fbc-97da-da3b148700ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:28:07 crc kubenswrapper[4637]: I1201 15:28:07.004565 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59829fcd-70b9-4fbc-97da-da3b148700ae-kube-api-access-8pd78" (OuterVolumeSpecName: "kube-api-access-8pd78") pod "59829fcd-70b9-4fbc-97da-da3b148700ae" (UID: "59829fcd-70b9-4fbc-97da-da3b148700ae"). InnerVolumeSpecName "kube-api-access-8pd78". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:28:07 crc kubenswrapper[4637]: I1201 15:28:07.021686 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59829fcd-70b9-4fbc-97da-da3b148700ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59829fcd-70b9-4fbc-97da-da3b148700ae" (UID: "59829fcd-70b9-4fbc-97da-da3b148700ae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:28:07 crc kubenswrapper[4637]: I1201 15:28:07.101555 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59829fcd-70b9-4fbc-97da-da3b148700ae-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 15:28:07 crc kubenswrapper[4637]: I1201 15:28:07.101585 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59829fcd-70b9-4fbc-97da-da3b148700ae-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 15:28:07 crc kubenswrapper[4637]: I1201 15:28:07.101597 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pd78\" (UniqueName: \"kubernetes.io/projected/59829fcd-70b9-4fbc-97da-da3b148700ae-kube-api-access-8pd78\") on node \"crc\" DevicePath \"\"" Dec 01 15:28:07 crc kubenswrapper[4637]: I1201 15:28:07.456338 4637 generic.go:334] "Generic (PLEG): container finished" podID="59829fcd-70b9-4fbc-97da-da3b148700ae" containerID="738fdd22cb4f7159505d0e1879d2abb466818ec12e0a64f934891472a16c0b5a" exitCode=0 Dec 01 15:28:07 crc kubenswrapper[4637]: I1201 15:28:07.456390 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4zlh" event={"ID":"59829fcd-70b9-4fbc-97da-da3b148700ae","Type":"ContainerDied","Data":"738fdd22cb4f7159505d0e1879d2abb466818ec12e0a64f934891472a16c0b5a"} Dec 01 15:28:07 crc kubenswrapper[4637]: I1201 15:28:07.456430 4637 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-q4zlh" event={"ID":"59829fcd-70b9-4fbc-97da-da3b148700ae","Type":"ContainerDied","Data":"e0be6fdcdae835cd75254ec890a31ffef69a9dc64eb514a13300412022699e06"} Dec 01 15:28:07 crc kubenswrapper[4637]: I1201 15:28:07.456440 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q4zlh" Dec 01 15:28:07 crc kubenswrapper[4637]: I1201 15:28:07.456470 4637 scope.go:117] "RemoveContainer" containerID="738fdd22cb4f7159505d0e1879d2abb466818ec12e0a64f934891472a16c0b5a" Dec 01 15:28:07 crc kubenswrapper[4637]: I1201 15:28:07.510771 4637 scope.go:117] "RemoveContainer" containerID="5de2c2d42df2545156775d5ee6d85377fa12e98c8bc2e05ac1f2478344cd065a" Dec 01 15:28:07 crc kubenswrapper[4637]: I1201 15:28:07.532953 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4zlh"] Dec 01 15:28:07 crc kubenswrapper[4637]: I1201 15:28:07.547741 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4zlh"] Dec 01 15:28:07 crc kubenswrapper[4637]: I1201 15:28:07.562990 4637 scope.go:117] "RemoveContainer" containerID="4b17d2f3c7b5abc9088ce5825ca55c3b8692b535e399129a3cfdc633a70bc8b6" Dec 01 15:28:07 crc kubenswrapper[4637]: I1201 15:28:07.595864 4637 scope.go:117] "RemoveContainer" containerID="738fdd22cb4f7159505d0e1879d2abb466818ec12e0a64f934891472a16c0b5a" Dec 01 15:28:07 crc kubenswrapper[4637]: E1201 15:28:07.596630 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"738fdd22cb4f7159505d0e1879d2abb466818ec12e0a64f934891472a16c0b5a\": container with ID starting with 738fdd22cb4f7159505d0e1879d2abb466818ec12e0a64f934891472a16c0b5a not found: ID does not exist" containerID="738fdd22cb4f7159505d0e1879d2abb466818ec12e0a64f934891472a16c0b5a" Dec 01 15:28:07 crc kubenswrapper[4637]: I1201 15:28:07.596677 4637 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"738fdd22cb4f7159505d0e1879d2abb466818ec12e0a64f934891472a16c0b5a"} err="failed to get container status \"738fdd22cb4f7159505d0e1879d2abb466818ec12e0a64f934891472a16c0b5a\": rpc error: code = NotFound desc = could not find container \"738fdd22cb4f7159505d0e1879d2abb466818ec12e0a64f934891472a16c0b5a\": container with ID starting with 738fdd22cb4f7159505d0e1879d2abb466818ec12e0a64f934891472a16c0b5a not found: ID does not exist" Dec 01 15:28:07 crc kubenswrapper[4637]: I1201 15:28:07.596705 4637 scope.go:117] "RemoveContainer" containerID="5de2c2d42df2545156775d5ee6d85377fa12e98c8bc2e05ac1f2478344cd065a" Dec 01 15:28:07 crc kubenswrapper[4637]: E1201 15:28:07.597189 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5de2c2d42df2545156775d5ee6d85377fa12e98c8bc2e05ac1f2478344cd065a\": container with ID starting with 5de2c2d42df2545156775d5ee6d85377fa12e98c8bc2e05ac1f2478344cd065a not found: ID does not exist" containerID="5de2c2d42df2545156775d5ee6d85377fa12e98c8bc2e05ac1f2478344cd065a" Dec 01 15:28:07 crc kubenswrapper[4637]: I1201 15:28:07.597216 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5de2c2d42df2545156775d5ee6d85377fa12e98c8bc2e05ac1f2478344cd065a"} err="failed to get container status \"5de2c2d42df2545156775d5ee6d85377fa12e98c8bc2e05ac1f2478344cd065a\": rpc error: code = NotFound desc = could not find container \"5de2c2d42df2545156775d5ee6d85377fa12e98c8bc2e05ac1f2478344cd065a\": container with ID starting with 5de2c2d42df2545156775d5ee6d85377fa12e98c8bc2e05ac1f2478344cd065a not found: ID does not exist" Dec 01 15:28:07 crc kubenswrapper[4637]: I1201 15:28:07.597241 4637 scope.go:117] "RemoveContainer" containerID="4b17d2f3c7b5abc9088ce5825ca55c3b8692b535e399129a3cfdc633a70bc8b6" Dec 01 15:28:07 crc kubenswrapper[4637]: E1201 
15:28:07.597565 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b17d2f3c7b5abc9088ce5825ca55c3b8692b535e399129a3cfdc633a70bc8b6\": container with ID starting with 4b17d2f3c7b5abc9088ce5825ca55c3b8692b535e399129a3cfdc633a70bc8b6 not found: ID does not exist" containerID="4b17d2f3c7b5abc9088ce5825ca55c3b8692b535e399129a3cfdc633a70bc8b6" Dec 01 15:28:07 crc kubenswrapper[4637]: I1201 15:28:07.597600 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b17d2f3c7b5abc9088ce5825ca55c3b8692b535e399129a3cfdc633a70bc8b6"} err="failed to get container status \"4b17d2f3c7b5abc9088ce5825ca55c3b8692b535e399129a3cfdc633a70bc8b6\": rpc error: code = NotFound desc = could not find container \"4b17d2f3c7b5abc9088ce5825ca55c3b8692b535e399129a3cfdc633a70bc8b6\": container with ID starting with 4b17d2f3c7b5abc9088ce5825ca55c3b8692b535e399129a3cfdc633a70bc8b6 not found: ID does not exist" Dec 01 15:28:07 crc kubenswrapper[4637]: I1201 15:28:07.784620 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59829fcd-70b9-4fbc-97da-da3b148700ae" path="/var/lib/kubelet/pods/59829fcd-70b9-4fbc-97da-da3b148700ae/volumes" Dec 01 15:28:11 crc kubenswrapper[4637]: I1201 15:28:11.575354 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bx5xm" Dec 01 15:28:11 crc kubenswrapper[4637]: I1201 15:28:11.632968 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bx5xm" Dec 01 15:28:11 crc kubenswrapper[4637]: I1201 15:28:11.812345 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bx5xm"] Dec 01 15:28:12 crc kubenswrapper[4637]: I1201 15:28:12.771704 4637 scope.go:117] "RemoveContainer" containerID="c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38" Dec 
01 15:28:12 crc kubenswrapper[4637]: E1201 15:28:12.771997 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:28:13 crc kubenswrapper[4637]: I1201 15:28:13.506515 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bx5xm" podUID="b335e55e-7c47-4930-8f05-831d7cd4f8b9" containerName="registry-server" containerID="cri-o://ba1e9bbec530846541425b891b08fbf32271724da4953e1059d3b2f299315c53" gracePeriod=2 Dec 01 15:28:13 crc kubenswrapper[4637]: I1201 15:28:13.955028 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bx5xm" Dec 01 15:28:14 crc kubenswrapper[4637]: I1201 15:28:14.045027 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b335e55e-7c47-4930-8f05-831d7cd4f8b9-utilities\") pod \"b335e55e-7c47-4930-8f05-831d7cd4f8b9\" (UID: \"b335e55e-7c47-4930-8f05-831d7cd4f8b9\") " Dec 01 15:28:14 crc kubenswrapper[4637]: I1201 15:28:14.045235 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b335e55e-7c47-4930-8f05-831d7cd4f8b9-catalog-content\") pod \"b335e55e-7c47-4930-8f05-831d7cd4f8b9\" (UID: \"b335e55e-7c47-4930-8f05-831d7cd4f8b9\") " Dec 01 15:28:14 crc kubenswrapper[4637]: I1201 15:28:14.045295 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvvvk\" (UniqueName: 
\"kubernetes.io/projected/b335e55e-7c47-4930-8f05-831d7cd4f8b9-kube-api-access-gvvvk\") pod \"b335e55e-7c47-4930-8f05-831d7cd4f8b9\" (UID: \"b335e55e-7c47-4930-8f05-831d7cd4f8b9\") " Dec 01 15:28:14 crc kubenswrapper[4637]: I1201 15:28:14.047377 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b335e55e-7c47-4930-8f05-831d7cd4f8b9-utilities" (OuterVolumeSpecName: "utilities") pod "b335e55e-7c47-4930-8f05-831d7cd4f8b9" (UID: "b335e55e-7c47-4930-8f05-831d7cd4f8b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:28:14 crc kubenswrapper[4637]: I1201 15:28:14.054745 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b335e55e-7c47-4930-8f05-831d7cd4f8b9-kube-api-access-gvvvk" (OuterVolumeSpecName: "kube-api-access-gvvvk") pod "b335e55e-7c47-4930-8f05-831d7cd4f8b9" (UID: "b335e55e-7c47-4930-8f05-831d7cd4f8b9"). InnerVolumeSpecName "kube-api-access-gvvvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:28:14 crc kubenswrapper[4637]: I1201 15:28:14.102846 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b335e55e-7c47-4930-8f05-831d7cd4f8b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b335e55e-7c47-4930-8f05-831d7cd4f8b9" (UID: "b335e55e-7c47-4930-8f05-831d7cd4f8b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:28:14 crc kubenswrapper[4637]: I1201 15:28:14.149357 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b335e55e-7c47-4930-8f05-831d7cd4f8b9-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 15:28:14 crc kubenswrapper[4637]: I1201 15:28:14.149411 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b335e55e-7c47-4930-8f05-831d7cd4f8b9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 15:28:14 crc kubenswrapper[4637]: I1201 15:28:14.149434 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvvvk\" (UniqueName: \"kubernetes.io/projected/b335e55e-7c47-4930-8f05-831d7cd4f8b9-kube-api-access-gvvvk\") on node \"crc\" DevicePath \"\"" Dec 01 15:28:14 crc kubenswrapper[4637]: I1201 15:28:14.521321 4637 generic.go:334] "Generic (PLEG): container finished" podID="b335e55e-7c47-4930-8f05-831d7cd4f8b9" containerID="ba1e9bbec530846541425b891b08fbf32271724da4953e1059d3b2f299315c53" exitCode=0 Dec 01 15:28:14 crc kubenswrapper[4637]: I1201 15:28:14.521366 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bx5xm" event={"ID":"b335e55e-7c47-4930-8f05-831d7cd4f8b9","Type":"ContainerDied","Data":"ba1e9bbec530846541425b891b08fbf32271724da4953e1059d3b2f299315c53"} Dec 01 15:28:14 crc kubenswrapper[4637]: I1201 15:28:14.521391 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bx5xm" event={"ID":"b335e55e-7c47-4930-8f05-831d7cd4f8b9","Type":"ContainerDied","Data":"757de1a21f5df22f66ca27cdacf5fda840e156ef2712edf0bc0c915f6b3c3096"} Dec 01 15:28:14 crc kubenswrapper[4637]: I1201 15:28:14.521409 4637 scope.go:117] "RemoveContainer" containerID="ba1e9bbec530846541425b891b08fbf32271724da4953e1059d3b2f299315c53" Dec 01 15:28:14 crc kubenswrapper[4637]: I1201 
15:28:14.521537 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bx5xm" Dec 01 15:28:14 crc kubenswrapper[4637]: I1201 15:28:14.558011 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bx5xm"] Dec 01 15:28:14 crc kubenswrapper[4637]: I1201 15:28:14.561588 4637 scope.go:117] "RemoveContainer" containerID="41e44c0b2cfb267dde7e7dcc22214591c7a27f07d509d6c7fb6da4d5c08c3127" Dec 01 15:28:14 crc kubenswrapper[4637]: I1201 15:28:14.567347 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bx5xm"] Dec 01 15:28:14 crc kubenswrapper[4637]: I1201 15:28:14.586653 4637 scope.go:117] "RemoveContainer" containerID="eb3fcb267a755965beb65f356b15bd101ae6212c7c67520b2d7f6ffff90c8a72" Dec 01 15:28:14 crc kubenswrapper[4637]: I1201 15:28:14.636593 4637 scope.go:117] "RemoveContainer" containerID="ba1e9bbec530846541425b891b08fbf32271724da4953e1059d3b2f299315c53" Dec 01 15:28:14 crc kubenswrapper[4637]: E1201 15:28:14.637182 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba1e9bbec530846541425b891b08fbf32271724da4953e1059d3b2f299315c53\": container with ID starting with ba1e9bbec530846541425b891b08fbf32271724da4953e1059d3b2f299315c53 not found: ID does not exist" containerID="ba1e9bbec530846541425b891b08fbf32271724da4953e1059d3b2f299315c53" Dec 01 15:28:14 crc kubenswrapper[4637]: I1201 15:28:14.637230 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba1e9bbec530846541425b891b08fbf32271724da4953e1059d3b2f299315c53"} err="failed to get container status \"ba1e9bbec530846541425b891b08fbf32271724da4953e1059d3b2f299315c53\": rpc error: code = NotFound desc = could not find container \"ba1e9bbec530846541425b891b08fbf32271724da4953e1059d3b2f299315c53\": container with ID starting with 
ba1e9bbec530846541425b891b08fbf32271724da4953e1059d3b2f299315c53 not found: ID does not exist" Dec 01 15:28:14 crc kubenswrapper[4637]: I1201 15:28:14.637255 4637 scope.go:117] "RemoveContainer" containerID="41e44c0b2cfb267dde7e7dcc22214591c7a27f07d509d6c7fb6da4d5c08c3127" Dec 01 15:28:14 crc kubenswrapper[4637]: E1201 15:28:14.637554 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41e44c0b2cfb267dde7e7dcc22214591c7a27f07d509d6c7fb6da4d5c08c3127\": container with ID starting with 41e44c0b2cfb267dde7e7dcc22214591c7a27f07d509d6c7fb6da4d5c08c3127 not found: ID does not exist" containerID="41e44c0b2cfb267dde7e7dcc22214591c7a27f07d509d6c7fb6da4d5c08c3127" Dec 01 15:28:14 crc kubenswrapper[4637]: I1201 15:28:14.637595 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41e44c0b2cfb267dde7e7dcc22214591c7a27f07d509d6c7fb6da4d5c08c3127"} err="failed to get container status \"41e44c0b2cfb267dde7e7dcc22214591c7a27f07d509d6c7fb6da4d5c08c3127\": rpc error: code = NotFound desc = could not find container \"41e44c0b2cfb267dde7e7dcc22214591c7a27f07d509d6c7fb6da4d5c08c3127\": container with ID starting with 41e44c0b2cfb267dde7e7dcc22214591c7a27f07d509d6c7fb6da4d5c08c3127 not found: ID does not exist" Dec 01 15:28:14 crc kubenswrapper[4637]: I1201 15:28:14.637620 4637 scope.go:117] "RemoveContainer" containerID="eb3fcb267a755965beb65f356b15bd101ae6212c7c67520b2d7f6ffff90c8a72" Dec 01 15:28:14 crc kubenswrapper[4637]: E1201 15:28:14.638122 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb3fcb267a755965beb65f356b15bd101ae6212c7c67520b2d7f6ffff90c8a72\": container with ID starting with eb3fcb267a755965beb65f356b15bd101ae6212c7c67520b2d7f6ffff90c8a72 not found: ID does not exist" containerID="eb3fcb267a755965beb65f356b15bd101ae6212c7c67520b2d7f6ffff90c8a72" Dec 01 15:28:14 crc 
kubenswrapper[4637]: I1201 15:28:14.638186 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb3fcb267a755965beb65f356b15bd101ae6212c7c67520b2d7f6ffff90c8a72"} err="failed to get container status \"eb3fcb267a755965beb65f356b15bd101ae6212c7c67520b2d7f6ffff90c8a72\": rpc error: code = NotFound desc = could not find container \"eb3fcb267a755965beb65f356b15bd101ae6212c7c67520b2d7f6ffff90c8a72\": container with ID starting with eb3fcb267a755965beb65f356b15bd101ae6212c7c67520b2d7f6ffff90c8a72 not found: ID does not exist" Dec 01 15:28:15 crc kubenswrapper[4637]: I1201 15:28:15.781874 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b335e55e-7c47-4930-8f05-831d7cd4f8b9" path="/var/lib/kubelet/pods/b335e55e-7c47-4930-8f05-831d7cd4f8b9/volumes" Dec 01 15:28:25 crc kubenswrapper[4637]: I1201 15:28:25.772070 4637 scope.go:117] "RemoveContainer" containerID="c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38" Dec 01 15:28:26 crc kubenswrapper[4637]: I1201 15:28:26.623558 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerStarted","Data":"c3517d140040d82880da546ea81861f4bec848a092a89ba6b57a5b1dd0184fc0"} Dec 01 15:30:00 crc kubenswrapper[4637]: I1201 15:30:00.155420 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410050-dnlj7"] Dec 01 15:30:00 crc kubenswrapper[4637]: E1201 15:30:00.156583 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b335e55e-7c47-4930-8f05-831d7cd4f8b9" containerName="extract-content" Dec 01 15:30:00 crc kubenswrapper[4637]: I1201 15:30:00.156598 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="b335e55e-7c47-4930-8f05-831d7cd4f8b9" containerName="extract-content" Dec 01 15:30:00 crc kubenswrapper[4637]: E1201 15:30:00.156620 4637 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b335e55e-7c47-4930-8f05-831d7cd4f8b9" containerName="extract-utilities" Dec 01 15:30:00 crc kubenswrapper[4637]: I1201 15:30:00.156627 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="b335e55e-7c47-4930-8f05-831d7cd4f8b9" containerName="extract-utilities" Dec 01 15:30:00 crc kubenswrapper[4637]: E1201 15:30:00.156642 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59829fcd-70b9-4fbc-97da-da3b148700ae" containerName="extract-content" Dec 01 15:30:00 crc kubenswrapper[4637]: I1201 15:30:00.156649 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="59829fcd-70b9-4fbc-97da-da3b148700ae" containerName="extract-content" Dec 01 15:30:00 crc kubenswrapper[4637]: E1201 15:30:00.156669 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59829fcd-70b9-4fbc-97da-da3b148700ae" containerName="extract-utilities" Dec 01 15:30:00 crc kubenswrapper[4637]: I1201 15:30:00.156676 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="59829fcd-70b9-4fbc-97da-da3b148700ae" containerName="extract-utilities" Dec 01 15:30:00 crc kubenswrapper[4637]: E1201 15:30:00.156690 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59829fcd-70b9-4fbc-97da-da3b148700ae" containerName="registry-server" Dec 01 15:30:00 crc kubenswrapper[4637]: I1201 15:30:00.156695 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="59829fcd-70b9-4fbc-97da-da3b148700ae" containerName="registry-server" Dec 01 15:30:00 crc kubenswrapper[4637]: E1201 15:30:00.156716 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0af51b78-9c76-4662-9f19-3dc1aced3c8f" containerName="extract-utilities" Dec 01 15:30:00 crc kubenswrapper[4637]: I1201 15:30:00.156722 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="0af51b78-9c76-4662-9f19-3dc1aced3c8f" containerName="extract-utilities" Dec 01 15:30:00 crc kubenswrapper[4637]: E1201 15:30:00.156732 4637 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0af51b78-9c76-4662-9f19-3dc1aced3c8f" containerName="extract-content" Dec 01 15:30:00 crc kubenswrapper[4637]: I1201 15:30:00.156739 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="0af51b78-9c76-4662-9f19-3dc1aced3c8f" containerName="extract-content" Dec 01 15:30:00 crc kubenswrapper[4637]: E1201 15:30:00.156750 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b335e55e-7c47-4930-8f05-831d7cd4f8b9" containerName="registry-server" Dec 01 15:30:00 crc kubenswrapper[4637]: I1201 15:30:00.156755 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="b335e55e-7c47-4930-8f05-831d7cd4f8b9" containerName="registry-server" Dec 01 15:30:00 crc kubenswrapper[4637]: E1201 15:30:00.156768 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0af51b78-9c76-4662-9f19-3dc1aced3c8f" containerName="registry-server" Dec 01 15:30:00 crc kubenswrapper[4637]: I1201 15:30:00.156773 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="0af51b78-9c76-4662-9f19-3dc1aced3c8f" containerName="registry-server" Dec 01 15:30:00 crc kubenswrapper[4637]: I1201 15:30:00.156990 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="59829fcd-70b9-4fbc-97da-da3b148700ae" containerName="registry-server" Dec 01 15:30:00 crc kubenswrapper[4637]: I1201 15:30:00.157003 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="b335e55e-7c47-4930-8f05-831d7cd4f8b9" containerName="registry-server" Dec 01 15:30:00 crc kubenswrapper[4637]: I1201 15:30:00.157027 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="0af51b78-9c76-4662-9f19-3dc1aced3c8f" containerName="registry-server" Dec 01 15:30:00 crc kubenswrapper[4637]: I1201 15:30:00.157815 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29410050-dnlj7" Dec 01 15:30:00 crc kubenswrapper[4637]: I1201 15:30:00.161832 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 15:30:00 crc kubenswrapper[4637]: I1201 15:30:00.165042 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 15:30:00 crc kubenswrapper[4637]: I1201 15:30:00.166521 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410050-dnlj7"] Dec 01 15:30:00 crc kubenswrapper[4637]: I1201 15:30:00.295067 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40ab58bb-0724-43fc-b2c9-726f4090bd35-secret-volume\") pod \"collect-profiles-29410050-dnlj7\" (UID: \"40ab58bb-0724-43fc-b2c9-726f4090bd35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410050-dnlj7" Dec 01 15:30:00 crc kubenswrapper[4637]: I1201 15:30:00.295447 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n6b2\" (UniqueName: \"kubernetes.io/projected/40ab58bb-0724-43fc-b2c9-726f4090bd35-kube-api-access-4n6b2\") pod \"collect-profiles-29410050-dnlj7\" (UID: \"40ab58bb-0724-43fc-b2c9-726f4090bd35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410050-dnlj7" Dec 01 15:30:00 crc kubenswrapper[4637]: I1201 15:30:00.295543 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40ab58bb-0724-43fc-b2c9-726f4090bd35-config-volume\") pod \"collect-profiles-29410050-dnlj7\" (UID: \"40ab58bb-0724-43fc-b2c9-726f4090bd35\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29410050-dnlj7" Dec 01 15:30:00 crc kubenswrapper[4637]: I1201 15:30:00.400640 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40ab58bb-0724-43fc-b2c9-726f4090bd35-config-volume\") pod \"collect-profiles-29410050-dnlj7\" (UID: \"40ab58bb-0724-43fc-b2c9-726f4090bd35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410050-dnlj7" Dec 01 15:30:00 crc kubenswrapper[4637]: I1201 15:30:00.400801 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40ab58bb-0724-43fc-b2c9-726f4090bd35-secret-volume\") pod \"collect-profiles-29410050-dnlj7\" (UID: \"40ab58bb-0724-43fc-b2c9-726f4090bd35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410050-dnlj7" Dec 01 15:30:00 crc kubenswrapper[4637]: I1201 15:30:00.400845 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n6b2\" (UniqueName: \"kubernetes.io/projected/40ab58bb-0724-43fc-b2c9-726f4090bd35-kube-api-access-4n6b2\") pod \"collect-profiles-29410050-dnlj7\" (UID: \"40ab58bb-0724-43fc-b2c9-726f4090bd35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410050-dnlj7" Dec 01 15:30:00 crc kubenswrapper[4637]: I1201 15:30:00.402700 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40ab58bb-0724-43fc-b2c9-726f4090bd35-config-volume\") pod \"collect-profiles-29410050-dnlj7\" (UID: \"40ab58bb-0724-43fc-b2c9-726f4090bd35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410050-dnlj7" Dec 01 15:30:00 crc kubenswrapper[4637]: I1201 15:30:00.411057 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/40ab58bb-0724-43fc-b2c9-726f4090bd35-secret-volume\") pod \"collect-profiles-29410050-dnlj7\" (UID: \"40ab58bb-0724-43fc-b2c9-726f4090bd35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410050-dnlj7" Dec 01 15:30:00 crc kubenswrapper[4637]: I1201 15:30:00.439609 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n6b2\" (UniqueName: \"kubernetes.io/projected/40ab58bb-0724-43fc-b2c9-726f4090bd35-kube-api-access-4n6b2\") pod \"collect-profiles-29410050-dnlj7\" (UID: \"40ab58bb-0724-43fc-b2c9-726f4090bd35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410050-dnlj7" Dec 01 15:30:00 crc kubenswrapper[4637]: I1201 15:30:00.480084 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29410050-dnlj7" Dec 01 15:30:01 crc kubenswrapper[4637]: I1201 15:30:01.022536 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410050-dnlj7"] Dec 01 15:30:01 crc kubenswrapper[4637]: I1201 15:30:01.530632 4637 generic.go:334] "Generic (PLEG): container finished" podID="40ab58bb-0724-43fc-b2c9-726f4090bd35" containerID="7eec05aecbc7a079cc7e4908b49a335b45eede94a929ca124b98feb8b1d6f380" exitCode=0 Dec 01 15:30:01 crc kubenswrapper[4637]: I1201 15:30:01.530749 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29410050-dnlj7" event={"ID":"40ab58bb-0724-43fc-b2c9-726f4090bd35","Type":"ContainerDied","Data":"7eec05aecbc7a079cc7e4908b49a335b45eede94a929ca124b98feb8b1d6f380"} Dec 01 15:30:01 crc kubenswrapper[4637]: I1201 15:30:01.530865 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29410050-dnlj7" 
event={"ID":"40ab58bb-0724-43fc-b2c9-726f4090bd35","Type":"ContainerStarted","Data":"7c85f0d9dc18b2f7bb5fbf89df08692a4fee8a77d665a44c67a39a6baf671a8d"} Dec 01 15:30:02 crc kubenswrapper[4637]: I1201 15:30:02.958558 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29410050-dnlj7" Dec 01 15:30:03 crc kubenswrapper[4637]: I1201 15:30:03.055341 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n6b2\" (UniqueName: \"kubernetes.io/projected/40ab58bb-0724-43fc-b2c9-726f4090bd35-kube-api-access-4n6b2\") pod \"40ab58bb-0724-43fc-b2c9-726f4090bd35\" (UID: \"40ab58bb-0724-43fc-b2c9-726f4090bd35\") " Dec 01 15:30:03 crc kubenswrapper[4637]: I1201 15:30:03.055710 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40ab58bb-0724-43fc-b2c9-726f4090bd35-config-volume\") pod \"40ab58bb-0724-43fc-b2c9-726f4090bd35\" (UID: \"40ab58bb-0724-43fc-b2c9-726f4090bd35\") " Dec 01 15:30:03 crc kubenswrapper[4637]: I1201 15:30:03.055738 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40ab58bb-0724-43fc-b2c9-726f4090bd35-secret-volume\") pod \"40ab58bb-0724-43fc-b2c9-726f4090bd35\" (UID: \"40ab58bb-0724-43fc-b2c9-726f4090bd35\") " Dec 01 15:30:03 crc kubenswrapper[4637]: I1201 15:30:03.056362 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40ab58bb-0724-43fc-b2c9-726f4090bd35-config-volume" (OuterVolumeSpecName: "config-volume") pod "40ab58bb-0724-43fc-b2c9-726f4090bd35" (UID: "40ab58bb-0724-43fc-b2c9-726f4090bd35"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:30:03 crc kubenswrapper[4637]: I1201 15:30:03.067414 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40ab58bb-0724-43fc-b2c9-726f4090bd35-kube-api-access-4n6b2" (OuterVolumeSpecName: "kube-api-access-4n6b2") pod "40ab58bb-0724-43fc-b2c9-726f4090bd35" (UID: "40ab58bb-0724-43fc-b2c9-726f4090bd35"). InnerVolumeSpecName "kube-api-access-4n6b2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:30:03 crc kubenswrapper[4637]: I1201 15:30:03.076579 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40ab58bb-0724-43fc-b2c9-726f4090bd35-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "40ab58bb-0724-43fc-b2c9-726f4090bd35" (UID: "40ab58bb-0724-43fc-b2c9-726f4090bd35"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:30:03 crc kubenswrapper[4637]: I1201 15:30:03.157795 4637 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40ab58bb-0724-43fc-b2c9-726f4090bd35-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 15:30:03 crc kubenswrapper[4637]: I1201 15:30:03.157828 4637 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40ab58bb-0724-43fc-b2c9-726f4090bd35-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 15:30:03 crc kubenswrapper[4637]: I1201 15:30:03.157839 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n6b2\" (UniqueName: \"kubernetes.io/projected/40ab58bb-0724-43fc-b2c9-726f4090bd35-kube-api-access-4n6b2\") on node \"crc\" DevicePath \"\"" Dec 01 15:30:03 crc kubenswrapper[4637]: I1201 15:30:03.552546 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29410050-dnlj7" 
event={"ID":"40ab58bb-0724-43fc-b2c9-726f4090bd35","Type":"ContainerDied","Data":"7c85f0d9dc18b2f7bb5fbf89df08692a4fee8a77d665a44c67a39a6baf671a8d"} Dec 01 15:30:03 crc kubenswrapper[4637]: I1201 15:30:03.552590 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c85f0d9dc18b2f7bb5fbf89df08692a4fee8a77d665a44c67a39a6baf671a8d" Dec 01 15:30:03 crc kubenswrapper[4637]: I1201 15:30:03.552659 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29410050-dnlj7" Dec 01 15:30:04 crc kubenswrapper[4637]: I1201 15:30:04.050362 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410005-5pnzr"] Dec 01 15:30:04 crc kubenswrapper[4637]: I1201 15:30:04.058636 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410005-5pnzr"] Dec 01 15:30:05 crc kubenswrapper[4637]: I1201 15:30:05.785133 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="251bd5c6-88f1-46eb-8a76-434c8e7a1e70" path="/var/lib/kubelet/pods/251bd5c6-88f1-46eb-8a76-434c8e7a1e70/volumes" Dec 01 15:30:22 crc kubenswrapper[4637]: I1201 15:30:22.727531 4637 generic.go:334] "Generic (PLEG): container finished" podID="30d902b2-5e9c-4431-a436-03edbc23458d" containerID="830da093235ae2f9770ad6ab1b73eb5f1edcbe385047573db03e06d9175a0b30" exitCode=0 Dec 01 15:30:22 crc kubenswrapper[4637]: I1201 15:30:22.728242 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" event={"ID":"30d902b2-5e9c-4431-a436-03edbc23458d","Type":"ContainerDied","Data":"830da093235ae2f9770ad6ab1b73eb5f1edcbe385047573db03e06d9175a0b30"} Dec 01 15:30:22 crc kubenswrapper[4637]: I1201 15:30:22.932704 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-84f489b6b7-wswv6" 
podUID="f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.207518 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.307787 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/30d902b2-5e9c-4431-a436-03edbc23458d-nova-extra-config-0\") pod \"30d902b2-5e9c-4431-a436-03edbc23458d\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.307832 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-ssh-key\") pod \"30d902b2-5e9c-4431-a436-03edbc23458d\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.307864 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-migration-ssh-key-0\") pod \"30d902b2-5e9c-4431-a436-03edbc23458d\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.307888 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-combined-ca-bundle\") pod \"30d902b2-5e9c-4431-a436-03edbc23458d\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.307918 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-migration-ssh-key-1\") pod \"30d902b2-5e9c-4431-a436-03edbc23458d\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.308802 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-cell1-compute-config-1\") pod \"30d902b2-5e9c-4431-a436-03edbc23458d\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.309162 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-cell1-compute-config-0\") pod \"30d902b2-5e9c-4431-a436-03edbc23458d\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.309260 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-inventory\") pod \"30d902b2-5e9c-4431-a436-03edbc23458d\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.309307 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gknmm\" (UniqueName: \"kubernetes.io/projected/30d902b2-5e9c-4431-a436-03edbc23458d-kube-api-access-gknmm\") pod \"30d902b2-5e9c-4431-a436-03edbc23458d\" (UID: \"30d902b2-5e9c-4431-a436-03edbc23458d\") " Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.314825 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "30d902b2-5e9c-4431-a436-03edbc23458d" (UID: 
"30d902b2-5e9c-4431-a436-03edbc23458d"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.345109 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "30d902b2-5e9c-4431-a436-03edbc23458d" (UID: "30d902b2-5e9c-4431-a436-03edbc23458d"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.345340 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30d902b2-5e9c-4431-a436-03edbc23458d-kube-api-access-gknmm" (OuterVolumeSpecName: "kube-api-access-gknmm") pod "30d902b2-5e9c-4431-a436-03edbc23458d" (UID: "30d902b2-5e9c-4431-a436-03edbc23458d"). InnerVolumeSpecName "kube-api-access-gknmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.345603 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "30d902b2-5e9c-4431-a436-03edbc23458d" (UID: "30d902b2-5e9c-4431-a436-03edbc23458d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.345731 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "30d902b2-5e9c-4431-a436-03edbc23458d" (UID: "30d902b2-5e9c-4431-a436-03edbc23458d"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.346384 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "30d902b2-5e9c-4431-a436-03edbc23458d" (UID: "30d902b2-5e9c-4431-a436-03edbc23458d"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.346862 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "30d902b2-5e9c-4431-a436-03edbc23458d" (UID: "30d902b2-5e9c-4431-a436-03edbc23458d"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.351719 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-inventory" (OuterVolumeSpecName: "inventory") pod "30d902b2-5e9c-4431-a436-03edbc23458d" (UID: "30d902b2-5e9c-4431-a436-03edbc23458d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.374501 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30d902b2-5e9c-4431-a436-03edbc23458d-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "30d902b2-5e9c-4431-a436-03edbc23458d" (UID: "30d902b2-5e9c-4431-a436-03edbc23458d"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.412348 4637 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.412391 4637 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.412401 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gknmm\" (UniqueName: \"kubernetes.io/projected/30d902b2-5e9c-4431-a436-03edbc23458d-kube-api-access-gknmm\") on node \"crc\" DevicePath \"\"" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.412411 4637 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/30d902b2-5e9c-4431-a436-03edbc23458d-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.412448 4637 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.412458 4637 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.412465 4637 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:30:24 
crc kubenswrapper[4637]: I1201 15:30:24.412473 4637 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.412482 4637 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/30d902b2-5e9c-4431-a436-03edbc23458d-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.747328 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" event={"ID":"30d902b2-5e9c-4431-a436-03edbc23458d","Type":"ContainerDied","Data":"a1a2a45c5ec79b9ac5b10e2e10fd5d1295c653b7a8303cd8ba02b30bc27f872a"} Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.747647 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1a2a45c5ec79b9ac5b10e2e10fd5d1295c653b7a8303cd8ba02b30bc27f872a" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.747370 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-md4m5" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.853594 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg"] Dec 01 15:30:24 crc kubenswrapper[4637]: E1201 15:30:24.854057 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d902b2-5e9c-4431-a436-03edbc23458d" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.854074 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d902b2-5e9c-4431-a436-03edbc23458d" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 01 15:30:24 crc kubenswrapper[4637]: E1201 15:30:24.854089 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40ab58bb-0724-43fc-b2c9-726f4090bd35" containerName="collect-profiles" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.854097 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="40ab58bb-0724-43fc-b2c9-726f4090bd35" containerName="collect-profiles" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.854416 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d902b2-5e9c-4431-a436-03edbc23458d" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.854448 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="40ab58bb-0724-43fc-b2c9-726f4090bd35" containerName="collect-profiles" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.855217 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.861329 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.861385 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.861422 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.861629 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.861767 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lt5wx" Dec 01 15:30:24 crc kubenswrapper[4637]: I1201 15:30:24.871505 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg"] Dec 01 15:30:25 crc kubenswrapper[4637]: I1201 15:30:25.023045 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg\" (UID: \"48718ab6-39c1-430f-ac3c-711d073d32f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" Dec 01 15:30:25 crc kubenswrapper[4637]: I1201 15:30:25.023099 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg\" (UID: \"48718ab6-39c1-430f-ac3c-711d073d32f9\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" Dec 01 15:30:25 crc kubenswrapper[4637]: I1201 15:30:25.023129 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrj5x\" (UniqueName: \"kubernetes.io/projected/48718ab6-39c1-430f-ac3c-711d073d32f9-kube-api-access-jrj5x\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg\" (UID: \"48718ab6-39c1-430f-ac3c-711d073d32f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" Dec 01 15:30:25 crc kubenswrapper[4637]: I1201 15:30:25.023146 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg\" (UID: \"48718ab6-39c1-430f-ac3c-711d073d32f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" Dec 01 15:30:25 crc kubenswrapper[4637]: I1201 15:30:25.023195 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg\" (UID: \"48718ab6-39c1-430f-ac3c-711d073d32f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" Dec 01 15:30:25 crc kubenswrapper[4637]: I1201 15:30:25.023266 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg\" (UID: \"48718ab6-39c1-430f-ac3c-711d073d32f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" Dec 01 
15:30:25 crc kubenswrapper[4637]: I1201 15:30:25.023295 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg\" (UID: \"48718ab6-39c1-430f-ac3c-711d073d32f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" Dec 01 15:30:25 crc kubenswrapper[4637]: I1201 15:30:25.124952 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg\" (UID: \"48718ab6-39c1-430f-ac3c-711d073d32f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" Dec 01 15:30:25 crc kubenswrapper[4637]: I1201 15:30:25.125015 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg\" (UID: \"48718ab6-39c1-430f-ac3c-711d073d32f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" Dec 01 15:30:25 crc kubenswrapper[4637]: I1201 15:30:25.125059 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg\" (UID: \"48718ab6-39c1-430f-ac3c-711d073d32f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" Dec 01 15:30:25 crc kubenswrapper[4637]: I1201 15:30:25.125099 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg\" (UID: \"48718ab6-39c1-430f-ac3c-711d073d32f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" Dec 01 15:30:25 crc kubenswrapper[4637]: I1201 15:30:25.125134 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrj5x\" (UniqueName: \"kubernetes.io/projected/48718ab6-39c1-430f-ac3c-711d073d32f9-kube-api-access-jrj5x\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg\" (UID: \"48718ab6-39c1-430f-ac3c-711d073d32f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" Dec 01 15:30:25 crc kubenswrapper[4637]: I1201 15:30:25.125157 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg\" (UID: \"48718ab6-39c1-430f-ac3c-711d073d32f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" Dec 01 15:30:25 crc kubenswrapper[4637]: I1201 15:30:25.125220 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg\" (UID: \"48718ab6-39c1-430f-ac3c-711d073d32f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" Dec 01 15:30:25 crc kubenswrapper[4637]: I1201 15:30:25.129541 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg\" (UID: \"48718ab6-39c1-430f-ac3c-711d073d32f9\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" Dec 01 15:30:25 crc kubenswrapper[4637]: I1201 15:30:25.129542 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg\" (UID: \"48718ab6-39c1-430f-ac3c-711d073d32f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" Dec 01 15:30:25 crc kubenswrapper[4637]: I1201 15:30:25.129908 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg\" (UID: \"48718ab6-39c1-430f-ac3c-711d073d32f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" Dec 01 15:30:25 crc kubenswrapper[4637]: I1201 15:30:25.132685 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg\" (UID: \"48718ab6-39c1-430f-ac3c-711d073d32f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" Dec 01 15:30:25 crc kubenswrapper[4637]: I1201 15:30:25.136925 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg\" (UID: \"48718ab6-39c1-430f-ac3c-711d073d32f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" Dec 01 15:30:25 crc kubenswrapper[4637]: I1201 15:30:25.144232 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg\" (UID: \"48718ab6-39c1-430f-ac3c-711d073d32f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" Dec 01 15:30:25 crc kubenswrapper[4637]: I1201 15:30:25.147048 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrj5x\" (UniqueName: \"kubernetes.io/projected/48718ab6-39c1-430f-ac3c-711d073d32f9-kube-api-access-jrj5x\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg\" (UID: \"48718ab6-39c1-430f-ac3c-711d073d32f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" Dec 01 15:30:25 crc kubenswrapper[4637]: I1201 15:30:25.180464 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" Dec 01 15:30:25 crc kubenswrapper[4637]: I1201 15:30:25.753984 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg"] Dec 01 15:30:25 crc kubenswrapper[4637]: W1201 15:30:25.759734 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48718ab6_39c1_430f_ac3c_711d073d32f9.slice/crio-f43f163bda16ac194f036c6bb969d29f81d795c8683ba2dc92924d0b0c477333 WatchSource:0}: Error finding container f43f163bda16ac194f036c6bb969d29f81d795c8683ba2dc92924d0b0c477333: Status 404 returned error can't find the container with id f43f163bda16ac194f036c6bb969d29f81d795c8683ba2dc92924d0b0c477333 Dec 01 15:30:25 crc kubenswrapper[4637]: I1201 15:30:25.764308 4637 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 15:30:26 crc kubenswrapper[4637]: I1201 15:30:26.776511 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" event={"ID":"48718ab6-39c1-430f-ac3c-711d073d32f9","Type":"ContainerStarted","Data":"f43f163bda16ac194f036c6bb969d29f81d795c8683ba2dc92924d0b0c477333"} Dec 01 15:30:27 crc kubenswrapper[4637]: I1201 15:30:27.813616 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" event={"ID":"48718ab6-39c1-430f-ac3c-711d073d32f9","Type":"ContainerStarted","Data":"8a355b2c61fa275b375881cb6928871bbc75024fff1e130e3e05eaf67663c32e"} Dec 01 15:30:27 crc kubenswrapper[4637]: I1201 15:30:27.853886 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" podStartSLOduration=2.8648339419999997 podStartE2EDuration="3.853865064s" podCreationTimestamp="2025-12-01 15:30:24 +0000 UTC" firstStartedPulling="2025-12-01 15:30:25.763992854 +0000 UTC m=+2676.281701692" lastFinishedPulling="2025-12-01 15:30:26.753023986 +0000 UTC m=+2677.270732814" observedRunningTime="2025-12-01 15:30:27.841410857 +0000 UTC m=+2678.359119695" watchObservedRunningTime="2025-12-01 15:30:27.853865064 +0000 UTC m=+2678.371573902" Dec 01 15:30:34 crc kubenswrapper[4637]: I1201 15:30:34.454256 4637 scope.go:117] "RemoveContainer" containerID="3a2ff7f0cd922548e868fdc91f7c3964802ca860ef52df09d07873d1a5cf1e2d" Dec 01 15:30:45 crc kubenswrapper[4637]: I1201 15:30:45.613541 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:30:45 crc kubenswrapper[4637]: I1201 15:30:45.614128 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:31:15 crc kubenswrapper[4637]: I1201 15:31:15.613628 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:31:15 crc kubenswrapper[4637]: I1201 15:31:15.616454 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:31:45 crc kubenswrapper[4637]: I1201 15:31:45.613178 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:31:45 crc kubenswrapper[4637]: I1201 15:31:45.613724 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:31:45 crc kubenswrapper[4637]: I1201 15:31:45.613785 4637 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 15:31:45 crc kubenswrapper[4637]: I1201 15:31:45.614516 4637 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c3517d140040d82880da546ea81861f4bec848a092a89ba6b57a5b1dd0184fc0"} pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 15:31:45 crc kubenswrapper[4637]: I1201 15:31:45.614569 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" containerID="cri-o://c3517d140040d82880da546ea81861f4bec848a092a89ba6b57a5b1dd0184fc0" gracePeriod=600 Dec 01 15:31:46 crc kubenswrapper[4637]: I1201 15:31:46.518157 4637 generic.go:334] "Generic (PLEG): container finished" podID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerID="c3517d140040d82880da546ea81861f4bec848a092a89ba6b57a5b1dd0184fc0" exitCode=0 Dec 01 15:31:46 crc kubenswrapper[4637]: I1201 15:31:46.518240 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerDied","Data":"c3517d140040d82880da546ea81861f4bec848a092a89ba6b57a5b1dd0184fc0"} Dec 01 15:31:46 crc kubenswrapper[4637]: I1201 15:31:46.519214 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerStarted","Data":"92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62"} Dec 01 15:31:46 crc kubenswrapper[4637]: I1201 15:31:46.519252 4637 scope.go:117] "RemoveContainer" containerID="c83a6721847a89c667789d304298c8f1648889799048f28eff79db2ca92e0d38" Dec 01 15:33:45 crc kubenswrapper[4637]: I1201 15:33:45.613342 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:33:45 crc kubenswrapper[4637]: I1201 15:33:45.613873 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:33:48 crc kubenswrapper[4637]: I1201 15:33:48.642896 4637 generic.go:334] "Generic (PLEG): container finished" podID="48718ab6-39c1-430f-ac3c-711d073d32f9" containerID="8a355b2c61fa275b375881cb6928871bbc75024fff1e130e3e05eaf67663c32e" exitCode=0 Dec 01 15:33:48 crc kubenswrapper[4637]: I1201 15:33:48.643008 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" event={"ID":"48718ab6-39c1-430f-ac3c-711d073d32f9","Type":"ContainerDied","Data":"8a355b2c61fa275b375881cb6928871bbc75024fff1e130e3e05eaf67663c32e"} Dec 01 15:33:50 crc kubenswrapper[4637]: I1201 15:33:50.066894 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" Dec 01 15:33:50 crc kubenswrapper[4637]: I1201 15:33:50.259803 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-ceilometer-compute-config-data-2\") pod \"48718ab6-39c1-430f-ac3c-711d073d32f9\" (UID: \"48718ab6-39c1-430f-ac3c-711d073d32f9\") " Dec 01 15:33:50 crc kubenswrapper[4637]: I1201 15:33:50.260739 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-telemetry-combined-ca-bundle\") pod \"48718ab6-39c1-430f-ac3c-711d073d32f9\" (UID: \"48718ab6-39c1-430f-ac3c-711d073d32f9\") " Dec 01 15:33:50 crc kubenswrapper[4637]: I1201 15:33:50.260828 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-ceilometer-compute-config-data-1\") pod \"48718ab6-39c1-430f-ac3c-711d073d32f9\" (UID: \"48718ab6-39c1-430f-ac3c-711d073d32f9\") " Dec 01 15:33:50 crc kubenswrapper[4637]: I1201 15:33:50.260856 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-ssh-key\") pod \"48718ab6-39c1-430f-ac3c-711d073d32f9\" (UID: \"48718ab6-39c1-430f-ac3c-711d073d32f9\") " Dec 01 15:33:50 crc kubenswrapper[4637]: I1201 15:33:50.260882 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-ceilometer-compute-config-data-0\") pod \"48718ab6-39c1-430f-ac3c-711d073d32f9\" (UID: \"48718ab6-39c1-430f-ac3c-711d073d32f9\") " Dec 01 15:33:50 crc 
kubenswrapper[4637]: I1201 15:33:50.260917 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrj5x\" (UniqueName: \"kubernetes.io/projected/48718ab6-39c1-430f-ac3c-711d073d32f9-kube-api-access-jrj5x\") pod \"48718ab6-39c1-430f-ac3c-711d073d32f9\" (UID: \"48718ab6-39c1-430f-ac3c-711d073d32f9\") " Dec 01 15:33:50 crc kubenswrapper[4637]: I1201 15:33:50.260991 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-inventory\") pod \"48718ab6-39c1-430f-ac3c-711d073d32f9\" (UID: \"48718ab6-39c1-430f-ac3c-711d073d32f9\") " Dec 01 15:33:50 crc kubenswrapper[4637]: I1201 15:33:50.293150 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "48718ab6-39c1-430f-ac3c-711d073d32f9" (UID: "48718ab6-39c1-430f-ac3c-711d073d32f9"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:33:50 crc kubenswrapper[4637]: I1201 15:33:50.311657 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48718ab6-39c1-430f-ac3c-711d073d32f9-kube-api-access-jrj5x" (OuterVolumeSpecName: "kube-api-access-jrj5x") pod "48718ab6-39c1-430f-ac3c-711d073d32f9" (UID: "48718ab6-39c1-430f-ac3c-711d073d32f9"). InnerVolumeSpecName "kube-api-access-jrj5x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:33:50 crc kubenswrapper[4637]: I1201 15:33:50.364433 4637 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 15:33:50 crc kubenswrapper[4637]: I1201 15:33:50.364459 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrj5x\" (UniqueName: \"kubernetes.io/projected/48718ab6-39c1-430f-ac3c-711d073d32f9-kube-api-access-jrj5x\") on node \"crc\" DevicePath \"\"" Dec 01 15:33:50 crc kubenswrapper[4637]: I1201 15:33:50.372087 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "48718ab6-39c1-430f-ac3c-711d073d32f9" (UID: "48718ab6-39c1-430f-ac3c-711d073d32f9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:33:50 crc kubenswrapper[4637]: I1201 15:33:50.372411 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "48718ab6-39c1-430f-ac3c-711d073d32f9" (UID: "48718ab6-39c1-430f-ac3c-711d073d32f9"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:33:50 crc kubenswrapper[4637]: I1201 15:33:50.387306 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "48718ab6-39c1-430f-ac3c-711d073d32f9" (UID: "48718ab6-39c1-430f-ac3c-711d073d32f9"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:33:50 crc kubenswrapper[4637]: I1201 15:33:50.389813 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-inventory" (OuterVolumeSpecName: "inventory") pod "48718ab6-39c1-430f-ac3c-711d073d32f9" (UID: "48718ab6-39c1-430f-ac3c-711d073d32f9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:33:50 crc kubenswrapper[4637]: I1201 15:33:50.389896 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "48718ab6-39c1-430f-ac3c-711d073d32f9" (UID: "48718ab6-39c1-430f-ac3c-711d073d32f9"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:33:50 crc kubenswrapper[4637]: I1201 15:33:50.467057 4637 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 01 15:33:50 crc kubenswrapper[4637]: I1201 15:33:50.467086 4637 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 01 15:33:50 crc kubenswrapper[4637]: I1201 15:33:50.467099 4637 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 15:33:50 crc kubenswrapper[4637]: I1201 15:33:50.467109 4637 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 01 15:33:50 crc kubenswrapper[4637]: I1201 15:33:50.467119 4637 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48718ab6-39c1-430f-ac3c-711d073d32f9-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 15:33:50 crc kubenswrapper[4637]: I1201 15:33:50.663493 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" event={"ID":"48718ab6-39c1-430f-ac3c-711d073d32f9","Type":"ContainerDied","Data":"f43f163bda16ac194f036c6bb969d29f81d795c8683ba2dc92924d0b0c477333"} Dec 01 15:33:50 crc kubenswrapper[4637]: I1201 15:33:50.663766 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f43f163bda16ac194f036c6bb969d29f81d795c8683ba2dc92924d0b0c477333" Dec 01 15:33:50 crc kubenswrapper[4637]: I1201 15:33:50.663553 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg" Dec 01 15:34:03 crc kubenswrapper[4637]: I1201 15:34:03.163836 4637 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-df4gw" podUID="2a407577-a89c-4bd1-9e97-0140f2ea2c40" containerName="registry-server" probeResult="failure" output=< Dec 01 15:34:03 crc kubenswrapper[4637]: timeout: failed to connect service ":50051" within 1s Dec 01 15:34:03 crc kubenswrapper[4637]: > Dec 01 15:34:15 crc kubenswrapper[4637]: I1201 15:34:15.618252 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:34:15 crc kubenswrapper[4637]: I1201 15:34:15.618711 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:34:43 crc kubenswrapper[4637]: I1201 15:34:43.724196 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Dec 01 15:34:43 crc kubenswrapper[4637]: E1201 15:34:43.725243 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48718ab6-39c1-430f-ac3c-711d073d32f9" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 01 15:34:43 crc kubenswrapper[4637]: I1201 15:34:43.725264 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="48718ab6-39c1-430f-ac3c-711d073d32f9" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 01 15:34:43 crc kubenswrapper[4637]: I1201 15:34:43.725521 4637 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="48718ab6-39c1-430f-ac3c-711d073d32f9" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 01 15:34:43 crc kubenswrapper[4637]: I1201 15:34:43.726353 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 01 15:34:43 crc kubenswrapper[4637]: I1201 15:34:43.728730 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 01 15:34:43 crc kubenswrapper[4637]: I1201 15:34:43.728763 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 01 15:34:43 crc kubenswrapper[4637]: I1201 15:34:43.734834 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 01 15:34:43 crc kubenswrapper[4637]: I1201 15:34:43.735468 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-4wp8n" Dec 01 15:34:43 crc kubenswrapper[4637]: I1201 15:34:43.742491 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 01 15:34:43 crc kubenswrapper[4637]: I1201 15:34:43.925491 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " pod="openstack/tempest-tests-tempest" Dec 01 15:34:43 crc kubenswrapper[4637]: I1201 15:34:43.925563 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " pod="openstack/tempest-tests-tempest" Dec 01 15:34:43 crc kubenswrapper[4637]: I1201 
15:34:43.925602 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " pod="openstack/tempest-tests-tempest" Dec 01 15:34:43 crc kubenswrapper[4637]: I1201 15:34:43.925639 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " pod="openstack/tempest-tests-tempest" Dec 01 15:34:43 crc kubenswrapper[4637]: I1201 15:34:43.925707 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " pod="openstack/tempest-tests-tempest" Dec 01 15:34:43 crc kubenswrapper[4637]: I1201 15:34:43.925837 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " pod="openstack/tempest-tests-tempest" Dec 01 15:34:43 crc kubenswrapper[4637]: I1201 15:34:43.925867 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " pod="openstack/tempest-tests-tempest" Dec 01 15:34:43 crc 
kubenswrapper[4637]: I1201 15:34:43.925890 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzl79\" (UniqueName: \"kubernetes.io/projected/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-kube-api-access-gzl79\") pod \"tempest-tests-tempest\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " pod="openstack/tempest-tests-tempest" Dec 01 15:34:43 crc kubenswrapper[4637]: I1201 15:34:43.925915 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-config-data\") pod \"tempest-tests-tempest\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " pod="openstack/tempest-tests-tempest" Dec 01 15:34:44 crc kubenswrapper[4637]: I1201 15:34:44.028168 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " pod="openstack/tempest-tests-tempest" Dec 01 15:34:44 crc kubenswrapper[4637]: I1201 15:34:44.028673 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " pod="openstack/tempest-tests-tempest" Dec 01 15:34:44 crc kubenswrapper[4637]: I1201 15:34:44.029011 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " pod="openstack/tempest-tests-tempest" Dec 01 15:34:44 crc kubenswrapper[4637]: I1201 
15:34:44.029171 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " pod="openstack/tempest-tests-tempest" Dec 01 15:34:44 crc kubenswrapper[4637]: I1201 15:34:44.029312 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzl79\" (UniqueName: \"kubernetes.io/projected/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-kube-api-access-gzl79\") pod \"tempest-tests-tempest\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " pod="openstack/tempest-tests-tempest" Dec 01 15:34:44 crc kubenswrapper[4637]: I1201 15:34:44.029460 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-config-data\") pod \"tempest-tests-tempest\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " pod="openstack/tempest-tests-tempest" Dec 01 15:34:44 crc kubenswrapper[4637]: I1201 15:34:44.029516 4637 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/tempest-tests-tempest" Dec 01 15:34:44 crc kubenswrapper[4637]: I1201 15:34:44.029695 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " pod="openstack/tempest-tests-tempest" Dec 01 15:34:44 crc kubenswrapper[4637]: I1201 15:34:44.030077 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " pod="openstack/tempest-tests-tempest" Dec 01 15:34:44 crc kubenswrapper[4637]: I1201 15:34:44.030324 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " pod="openstack/tempest-tests-tempest" Dec 01 15:34:44 crc kubenswrapper[4637]: I1201 15:34:44.030507 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " pod="openstack/tempest-tests-tempest" Dec 01 15:34:44 crc kubenswrapper[4637]: I1201 15:34:44.030735 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-config-data\") pod \"tempest-tests-tempest\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " pod="openstack/tempest-tests-tempest" Dec 01 15:34:44 crc kubenswrapper[4637]: I1201 15:34:44.031345 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " pod="openstack/tempest-tests-tempest" Dec 01 15:34:44 crc kubenswrapper[4637]: I1201 15:34:44.031558 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " pod="openstack/tempest-tests-tempest" Dec 01 15:34:44 crc kubenswrapper[4637]: I1201 15:34:44.041025 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " pod="openstack/tempest-tests-tempest" Dec 01 15:34:44 crc kubenswrapper[4637]: I1201 15:34:44.041388 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " pod="openstack/tempest-tests-tempest" Dec 01 15:34:44 crc kubenswrapper[4637]: I1201 15:34:44.048835 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzl79\" (UniqueName: \"kubernetes.io/projected/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-kube-api-access-gzl79\") pod \"tempest-tests-tempest\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " pod="openstack/tempest-tests-tempest" Dec 01 15:34:44 crc kubenswrapper[4637]: I1201 15:34:44.050144 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " pod="openstack/tempest-tests-tempest" Dec 01 15:34:44 crc kubenswrapper[4637]: I1201 15:34:44.065657 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " 
pod="openstack/tempest-tests-tempest" Dec 01 15:34:44 crc kubenswrapper[4637]: I1201 15:34:44.350037 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 01 15:34:44 crc kubenswrapper[4637]: I1201 15:34:44.819548 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 01 15:34:44 crc kubenswrapper[4637]: W1201 15:34:44.834305 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod151da5f8_6a6e_4d06_b6a5_de2982ed8da5.slice/crio-4bb86a95f87fb38398d89480ea85ef3afb4fc35b3442cc54a08d3557a40520b0 WatchSource:0}: Error finding container 4bb86a95f87fb38398d89480ea85ef3afb4fc35b3442cc54a08d3557a40520b0: Status 404 returned error can't find the container with id 4bb86a95f87fb38398d89480ea85ef3afb4fc35b3442cc54a08d3557a40520b0 Dec 01 15:34:45 crc kubenswrapper[4637]: I1201 15:34:45.555570 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"151da5f8-6a6e-4d06-b6a5-de2982ed8da5","Type":"ContainerStarted","Data":"4bb86a95f87fb38398d89480ea85ef3afb4fc35b3442cc54a08d3557a40520b0"} Dec 01 15:34:45 crc kubenswrapper[4637]: I1201 15:34:45.613707 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:34:45 crc kubenswrapper[4637]: I1201 15:34:45.613790 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:34:45 crc 
kubenswrapper[4637]: I1201 15:34:45.613848 4637 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 15:34:45 crc kubenswrapper[4637]: I1201 15:34:45.615316 4637 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62"} pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 15:34:45 crc kubenswrapper[4637]: I1201 15:34:45.615395 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" containerID="cri-o://92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62" gracePeriod=600 Dec 01 15:34:45 crc kubenswrapper[4637]: E1201 15:34:45.746461 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:34:46 crc kubenswrapper[4637]: I1201 15:34:46.566820 4637 generic.go:334] "Generic (PLEG): container finished" podID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerID="92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62" exitCode=0 Dec 01 15:34:46 crc kubenswrapper[4637]: I1201 15:34:46.566864 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" 
event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerDied","Data":"92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62"} Dec 01 15:34:46 crc kubenswrapper[4637]: I1201 15:34:46.566903 4637 scope.go:117] "RemoveContainer" containerID="c3517d140040d82880da546ea81861f4bec848a092a89ba6b57a5b1dd0184fc0" Dec 01 15:34:46 crc kubenswrapper[4637]: I1201 15:34:46.567880 4637 scope.go:117] "RemoveContainer" containerID="92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62" Dec 01 15:34:46 crc kubenswrapper[4637]: E1201 15:34:46.568147 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:34:56 crc kubenswrapper[4637]: I1201 15:34:56.497746 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pg9zg"] Dec 01 15:34:56 crc kubenswrapper[4637]: I1201 15:34:56.500759 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pg9zg" Dec 01 15:34:56 crc kubenswrapper[4637]: I1201 15:34:56.515975 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb-utilities\") pod \"redhat-operators-pg9zg\" (UID: \"fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb\") " pod="openshift-marketplace/redhat-operators-pg9zg" Dec 01 15:34:56 crc kubenswrapper[4637]: I1201 15:34:56.516049 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb-catalog-content\") pod \"redhat-operators-pg9zg\" (UID: \"fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb\") " pod="openshift-marketplace/redhat-operators-pg9zg" Dec 01 15:34:56 crc kubenswrapper[4637]: I1201 15:34:56.516131 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j9r7\" (UniqueName: \"kubernetes.io/projected/fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb-kube-api-access-4j9r7\") pod \"redhat-operators-pg9zg\" (UID: \"fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb\") " pod="openshift-marketplace/redhat-operators-pg9zg" Dec 01 15:34:56 crc kubenswrapper[4637]: I1201 15:34:56.539771 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pg9zg"] Dec 01 15:34:56 crc kubenswrapper[4637]: I1201 15:34:56.616813 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb-utilities\") pod \"redhat-operators-pg9zg\" (UID: \"fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb\") " pod="openshift-marketplace/redhat-operators-pg9zg" Dec 01 15:34:56 crc kubenswrapper[4637]: I1201 15:34:56.616914 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb-catalog-content\") pod \"redhat-operators-pg9zg\" (UID: \"fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb\") " pod="openshift-marketplace/redhat-operators-pg9zg" Dec 01 15:34:56 crc kubenswrapper[4637]: I1201 15:34:56.617456 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb-catalog-content\") pod \"redhat-operators-pg9zg\" (UID: \"fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb\") " pod="openshift-marketplace/redhat-operators-pg9zg" Dec 01 15:34:56 crc kubenswrapper[4637]: I1201 15:34:56.617452 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb-utilities\") pod \"redhat-operators-pg9zg\" (UID: \"fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb\") " pod="openshift-marketplace/redhat-operators-pg9zg" Dec 01 15:34:56 crc kubenswrapper[4637]: I1201 15:34:56.617574 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j9r7\" (UniqueName: \"kubernetes.io/projected/fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb-kube-api-access-4j9r7\") pod \"redhat-operators-pg9zg\" (UID: \"fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb\") " pod="openshift-marketplace/redhat-operators-pg9zg" Dec 01 15:34:56 crc kubenswrapper[4637]: I1201 15:34:56.669364 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j9r7\" (UniqueName: \"kubernetes.io/projected/fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb-kube-api-access-4j9r7\") pod \"redhat-operators-pg9zg\" (UID: \"fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb\") " pod="openshift-marketplace/redhat-operators-pg9zg" Dec 01 15:34:56 crc kubenswrapper[4637]: I1201 15:34:56.852747 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pg9zg" Dec 01 15:34:57 crc kubenswrapper[4637]: I1201 15:34:57.479589 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pg9zg"] Dec 01 15:34:57 crc kubenswrapper[4637]: I1201 15:34:57.726979 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pg9zg" event={"ID":"fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb","Type":"ContainerStarted","Data":"a7259c8f4e50a18f900068c51029faab193a4aea0d2a089bddb52d2581a426d6"} Dec 01 15:34:58 crc kubenswrapper[4637]: I1201 15:34:58.748806 4637 generic.go:334] "Generic (PLEG): container finished" podID="fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb" containerID="e59fb39f46ff9b03a665e7c13633066488302baeda25aa5a29f4de5d35afa5be" exitCode=0 Dec 01 15:34:58 crc kubenswrapper[4637]: I1201 15:34:58.750107 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pg9zg" event={"ID":"fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb","Type":"ContainerDied","Data":"e59fb39f46ff9b03a665e7c13633066488302baeda25aa5a29f4de5d35afa5be"} Dec 01 15:35:00 crc kubenswrapper[4637]: I1201 15:35:00.772198 4637 scope.go:117] "RemoveContainer" containerID="92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62" Dec 01 15:35:00 crc kubenswrapper[4637]: E1201 15:35:00.780267 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:35:00 crc kubenswrapper[4637]: I1201 15:35:00.793349 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pg9zg" 
event={"ID":"fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb","Type":"ContainerStarted","Data":"09162f90ce58aeef001fe5f4a20a2377c374e4d292d242ac23ad6ec7bd96301e"} Dec 01 15:35:03 crc kubenswrapper[4637]: I1201 15:35:03.845095 4637 generic.go:334] "Generic (PLEG): container finished" podID="fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb" containerID="09162f90ce58aeef001fe5f4a20a2377c374e4d292d242ac23ad6ec7bd96301e" exitCode=0 Dec 01 15:35:03 crc kubenswrapper[4637]: I1201 15:35:03.845389 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pg9zg" event={"ID":"fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb","Type":"ContainerDied","Data":"09162f90ce58aeef001fe5f4a20a2377c374e4d292d242ac23ad6ec7bd96301e"} Dec 01 15:35:13 crc kubenswrapper[4637]: I1201 15:35:13.772070 4637 scope.go:117] "RemoveContainer" containerID="92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62" Dec 01 15:35:13 crc kubenswrapper[4637]: E1201 15:35:13.772697 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:35:26 crc kubenswrapper[4637]: I1201 15:35:26.776566 4637 scope.go:117] "RemoveContainer" containerID="92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62" Dec 01 15:35:26 crc kubenswrapper[4637]: E1201 15:35:26.777573 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:35:27 crc kubenswrapper[4637]: I1201 15:35:27.797640 4637 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 15:35:27 crc kubenswrapper[4637]: E1201 15:35:27.880990 4637 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Dec 01 15:35:27 crc kubenswrapper[4637]: E1201 15:35:27.881698 4637 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secr
et,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gzl79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(151da5f8-6a6e-4d06-b6a5-de2982ed8da5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 15:35:27 crc kubenswrapper[4637]: E1201 15:35:27.882912 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="151da5f8-6a6e-4d06-b6a5-de2982ed8da5" Dec 01 15:35:28 crc kubenswrapper[4637]: E1201 15:35:28.108418 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="151da5f8-6a6e-4d06-b6a5-de2982ed8da5" Dec 01 15:35:29 crc kubenswrapper[4637]: I1201 15:35:29.117898 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pg9zg" event={"ID":"fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb","Type":"ContainerStarted","Data":"d041545783f748b7e92672e08c6d81893def1be1f145803c5beb39e8ff2897a0"} Dec 01 15:35:29 crc kubenswrapper[4637]: I1201 15:35:29.144703 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pg9zg" podStartSLOduration=3.429787183 podStartE2EDuration="33.144680434s" podCreationTimestamp="2025-12-01 15:34:56 +0000 UTC" firstStartedPulling="2025-12-01 15:34:58.755278486 +0000 UTC m=+2949.272987314" lastFinishedPulling="2025-12-01 15:35:28.470171737 +0000 UTC m=+2978.987880565" observedRunningTime="2025-12-01 15:35:29.139180655 +0000 UTC m=+2979.656889493" watchObservedRunningTime="2025-12-01 15:35:29.144680434 +0000 UTC m=+2979.662389262" Dec 01 15:35:36 crc kubenswrapper[4637]: I1201 15:35:36.856248 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pg9zg" Dec 01 15:35:36 crc kubenswrapper[4637]: I1201 15:35:36.856829 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pg9zg" Dec 01 15:35:37 crc 
kubenswrapper[4637]: I1201 15:35:37.771663 4637 scope.go:117] "RemoveContainer" containerID="92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62" Dec 01 15:35:37 crc kubenswrapper[4637]: E1201 15:35:37.772313 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:35:37 crc kubenswrapper[4637]: I1201 15:35:37.927951 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pg9zg" podUID="fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb" containerName="registry-server" probeResult="failure" output=< Dec 01 15:35:37 crc kubenswrapper[4637]: timeout: failed to connect service ":50051" within 1s Dec 01 15:35:37 crc kubenswrapper[4637]: > Dec 01 15:35:41 crc kubenswrapper[4637]: I1201 15:35:41.269246 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 01 15:35:43 crc kubenswrapper[4637]: I1201 15:35:43.250648 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"151da5f8-6a6e-4d06-b6a5-de2982ed8da5","Type":"ContainerStarted","Data":"abf8047e5f00122a2f25f305c92c84829e664a906d9ce0905f3bab4e13b783cf"} Dec 01 15:35:43 crc kubenswrapper[4637]: I1201 15:35:43.278791 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.850743293 podStartE2EDuration="1m1.278771664s" podCreationTimestamp="2025-12-01 15:34:42 +0000 UTC" firstStartedPulling="2025-12-01 15:34:44.838124234 +0000 UTC m=+2935.355833052" lastFinishedPulling="2025-12-01 15:35:41.266152595 +0000 UTC 
m=+2991.783861423" observedRunningTime="2025-12-01 15:35:43.265508506 +0000 UTC m=+2993.783217344" watchObservedRunningTime="2025-12-01 15:35:43.278771664 +0000 UTC m=+2993.796480492" Dec 01 15:35:46 crc kubenswrapper[4637]: I1201 15:35:46.903969 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pg9zg" Dec 01 15:35:46 crc kubenswrapper[4637]: I1201 15:35:46.955523 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pg9zg" Dec 01 15:35:47 crc kubenswrapper[4637]: I1201 15:35:47.143380 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pg9zg"] Dec 01 15:35:48 crc kubenswrapper[4637]: I1201 15:35:48.309164 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pg9zg" podUID="fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb" containerName="registry-server" containerID="cri-o://d041545783f748b7e92672e08c6d81893def1be1f145803c5beb39e8ff2897a0" gracePeriod=2 Dec 01 15:35:48 crc kubenswrapper[4637]: I1201 15:35:48.805062 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pg9zg" Dec 01 15:35:48 crc kubenswrapper[4637]: I1201 15:35:48.943255 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb-utilities\") pod \"fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb\" (UID: \"fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb\") " Dec 01 15:35:48 crc kubenswrapper[4637]: I1201 15:35:48.943519 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb-catalog-content\") pod \"fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb\" (UID: \"fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb\") " Dec 01 15:35:48 crc kubenswrapper[4637]: I1201 15:35:48.943585 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j9r7\" (UniqueName: \"kubernetes.io/projected/fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb-kube-api-access-4j9r7\") pod \"fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb\" (UID: \"fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb\") " Dec 01 15:35:48 crc kubenswrapper[4637]: I1201 15:35:48.944150 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb-utilities" (OuterVolumeSpecName: "utilities") pod "fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb" (UID: "fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:35:48 crc kubenswrapper[4637]: I1201 15:35:48.954152 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb-kube-api-access-4j9r7" (OuterVolumeSpecName: "kube-api-access-4j9r7") pod "fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb" (UID: "fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb"). InnerVolumeSpecName "kube-api-access-4j9r7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:35:49 crc kubenswrapper[4637]: I1201 15:35:49.046471 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j9r7\" (UniqueName: \"kubernetes.io/projected/fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb-kube-api-access-4j9r7\") on node \"crc\" DevicePath \"\"" Dec 01 15:35:49 crc kubenswrapper[4637]: I1201 15:35:49.046546 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 15:35:49 crc kubenswrapper[4637]: I1201 15:35:49.068476 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb" (UID: "fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:35:49 crc kubenswrapper[4637]: I1201 15:35:49.149269 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 15:35:49 crc kubenswrapper[4637]: I1201 15:35:49.322694 4637 generic.go:334] "Generic (PLEG): container finished" podID="fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb" containerID="d041545783f748b7e92672e08c6d81893def1be1f145803c5beb39e8ff2897a0" exitCode=0 Dec 01 15:35:49 crc kubenswrapper[4637]: I1201 15:35:49.322740 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pg9zg" event={"ID":"fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb","Type":"ContainerDied","Data":"d041545783f748b7e92672e08c6d81893def1be1f145803c5beb39e8ff2897a0"} Dec 01 15:35:49 crc kubenswrapper[4637]: I1201 15:35:49.322775 4637 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-pg9zg" event={"ID":"fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb","Type":"ContainerDied","Data":"a7259c8f4e50a18f900068c51029faab193a4aea0d2a089bddb52d2581a426d6"} Dec 01 15:35:49 crc kubenswrapper[4637]: I1201 15:35:49.322813 4637 scope.go:117] "RemoveContainer" containerID="d041545783f748b7e92672e08c6d81893def1be1f145803c5beb39e8ff2897a0" Dec 01 15:35:49 crc kubenswrapper[4637]: I1201 15:35:49.324326 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pg9zg" Dec 01 15:35:49 crc kubenswrapper[4637]: I1201 15:35:49.348660 4637 scope.go:117] "RemoveContainer" containerID="09162f90ce58aeef001fe5f4a20a2377c374e4d292d242ac23ad6ec7bd96301e" Dec 01 15:35:49 crc kubenswrapper[4637]: I1201 15:35:49.373176 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pg9zg"] Dec 01 15:35:49 crc kubenswrapper[4637]: I1201 15:35:49.374824 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pg9zg"] Dec 01 15:35:49 crc kubenswrapper[4637]: I1201 15:35:49.382437 4637 scope.go:117] "RemoveContainer" containerID="e59fb39f46ff9b03a665e7c13633066488302baeda25aa5a29f4de5d35afa5be" Dec 01 15:35:49 crc kubenswrapper[4637]: I1201 15:35:49.443569 4637 scope.go:117] "RemoveContainer" containerID="d041545783f748b7e92672e08c6d81893def1be1f145803c5beb39e8ff2897a0" Dec 01 15:35:49 crc kubenswrapper[4637]: E1201 15:35:49.448733 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d041545783f748b7e92672e08c6d81893def1be1f145803c5beb39e8ff2897a0\": container with ID starting with d041545783f748b7e92672e08c6d81893def1be1f145803c5beb39e8ff2897a0 not found: ID does not exist" containerID="d041545783f748b7e92672e08c6d81893def1be1f145803c5beb39e8ff2897a0" Dec 01 15:35:49 crc kubenswrapper[4637]: I1201 15:35:49.448774 4637 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d041545783f748b7e92672e08c6d81893def1be1f145803c5beb39e8ff2897a0"} err="failed to get container status \"d041545783f748b7e92672e08c6d81893def1be1f145803c5beb39e8ff2897a0\": rpc error: code = NotFound desc = could not find container \"d041545783f748b7e92672e08c6d81893def1be1f145803c5beb39e8ff2897a0\": container with ID starting with d041545783f748b7e92672e08c6d81893def1be1f145803c5beb39e8ff2897a0 not found: ID does not exist" Dec 01 15:35:49 crc kubenswrapper[4637]: I1201 15:35:49.448796 4637 scope.go:117] "RemoveContainer" containerID="09162f90ce58aeef001fe5f4a20a2377c374e4d292d242ac23ad6ec7bd96301e" Dec 01 15:35:49 crc kubenswrapper[4637]: E1201 15:35:49.449064 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09162f90ce58aeef001fe5f4a20a2377c374e4d292d242ac23ad6ec7bd96301e\": container with ID starting with 09162f90ce58aeef001fe5f4a20a2377c374e4d292d242ac23ad6ec7bd96301e not found: ID does not exist" containerID="09162f90ce58aeef001fe5f4a20a2377c374e4d292d242ac23ad6ec7bd96301e" Dec 01 15:35:49 crc kubenswrapper[4637]: I1201 15:35:49.449089 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09162f90ce58aeef001fe5f4a20a2377c374e4d292d242ac23ad6ec7bd96301e"} err="failed to get container status \"09162f90ce58aeef001fe5f4a20a2377c374e4d292d242ac23ad6ec7bd96301e\": rpc error: code = NotFound desc = could not find container \"09162f90ce58aeef001fe5f4a20a2377c374e4d292d242ac23ad6ec7bd96301e\": container with ID starting with 09162f90ce58aeef001fe5f4a20a2377c374e4d292d242ac23ad6ec7bd96301e not found: ID does not exist" Dec 01 15:35:49 crc kubenswrapper[4637]: I1201 15:35:49.449102 4637 scope.go:117] "RemoveContainer" containerID="e59fb39f46ff9b03a665e7c13633066488302baeda25aa5a29f4de5d35afa5be" Dec 01 15:35:49 crc kubenswrapper[4637]: E1201 
15:35:49.449278 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e59fb39f46ff9b03a665e7c13633066488302baeda25aa5a29f4de5d35afa5be\": container with ID starting with e59fb39f46ff9b03a665e7c13633066488302baeda25aa5a29f4de5d35afa5be not found: ID does not exist" containerID="e59fb39f46ff9b03a665e7c13633066488302baeda25aa5a29f4de5d35afa5be" Dec 01 15:35:49 crc kubenswrapper[4637]: I1201 15:35:49.449297 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e59fb39f46ff9b03a665e7c13633066488302baeda25aa5a29f4de5d35afa5be"} err="failed to get container status \"e59fb39f46ff9b03a665e7c13633066488302baeda25aa5a29f4de5d35afa5be\": rpc error: code = NotFound desc = could not find container \"e59fb39f46ff9b03a665e7c13633066488302baeda25aa5a29f4de5d35afa5be\": container with ID starting with e59fb39f46ff9b03a665e7c13633066488302baeda25aa5a29f4de5d35afa5be not found: ID does not exist" Dec 01 15:35:49 crc kubenswrapper[4637]: I1201 15:35:49.839275 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb" path="/var/lib/kubelet/pods/fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb/volumes" Dec 01 15:35:50 crc kubenswrapper[4637]: I1201 15:35:50.771706 4637 scope.go:117] "RemoveContainer" containerID="92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62" Dec 01 15:35:50 crc kubenswrapper[4637]: E1201 15:35:50.771969 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:36:04 crc kubenswrapper[4637]: I1201 15:36:04.771328 
4637 scope.go:117] "RemoveContainer" containerID="92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62" Dec 01 15:36:04 crc kubenswrapper[4637]: E1201 15:36:04.772288 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:36:18 crc kubenswrapper[4637]: I1201 15:36:18.771454 4637 scope.go:117] "RemoveContainer" containerID="92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62" Dec 01 15:36:18 crc kubenswrapper[4637]: E1201 15:36:18.772301 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:36:30 crc kubenswrapper[4637]: I1201 15:36:30.772329 4637 scope.go:117] "RemoveContainer" containerID="92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62" Dec 01 15:36:30 crc kubenswrapper[4637]: E1201 15:36:30.773127 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:36:44 crc kubenswrapper[4637]: I1201 
15:36:44.772031 4637 scope.go:117] "RemoveContainer" containerID="92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62" Dec 01 15:36:44 crc kubenswrapper[4637]: E1201 15:36:44.772924 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:36:56 crc kubenswrapper[4637]: I1201 15:36:56.771990 4637 scope.go:117] "RemoveContainer" containerID="92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62" Dec 01 15:36:56 crc kubenswrapper[4637]: E1201 15:36:56.772805 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:37:10 crc kubenswrapper[4637]: I1201 15:37:10.771914 4637 scope.go:117] "RemoveContainer" containerID="92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62" Dec 01 15:37:10 crc kubenswrapper[4637]: E1201 15:37:10.773855 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:37:25 crc 
kubenswrapper[4637]: I1201 15:37:25.772630 4637 scope.go:117] "RemoveContainer" containerID="92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62" Dec 01 15:37:25 crc kubenswrapper[4637]: E1201 15:37:25.773291 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:37:36 crc kubenswrapper[4637]: I1201 15:37:36.771709 4637 scope.go:117] "RemoveContainer" containerID="92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62" Dec 01 15:37:36 crc kubenswrapper[4637]: E1201 15:37:36.772502 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:37:48 crc kubenswrapper[4637]: I1201 15:37:48.772392 4637 scope.go:117] "RemoveContainer" containerID="92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62" Dec 01 15:37:48 crc kubenswrapper[4637]: E1201 15:37:48.773224 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 
01 15:38:00 crc kubenswrapper[4637]: I1201 15:38:00.771627 4637 scope.go:117] "RemoveContainer" containerID="92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62" Dec 01 15:38:00 crc kubenswrapper[4637]: E1201 15:38:00.772424 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:38:11 crc kubenswrapper[4637]: I1201 15:38:11.771591 4637 scope.go:117] "RemoveContainer" containerID="92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62" Dec 01 15:38:11 crc kubenswrapper[4637]: E1201 15:38:11.772494 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:38:26 crc kubenswrapper[4637]: I1201 15:38:26.771753 4637 scope.go:117] "RemoveContainer" containerID="92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62" Dec 01 15:38:26 crc kubenswrapper[4637]: E1201 15:38:26.772695 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" 
podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:38:39 crc kubenswrapper[4637]: I1201 15:38:39.780424 4637 scope.go:117] "RemoveContainer" containerID="92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62" Dec 01 15:38:39 crc kubenswrapper[4637]: E1201 15:38:39.781446 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:38:49 crc kubenswrapper[4637]: I1201 15:38:49.238555 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fcvzb"] Dec 01 15:38:49 crc kubenswrapper[4637]: E1201 15:38:49.239431 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb" containerName="extract-content" Dec 01 15:38:49 crc kubenswrapper[4637]: I1201 15:38:49.239442 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb" containerName="extract-content" Dec 01 15:38:49 crc kubenswrapper[4637]: E1201 15:38:49.239466 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb" containerName="extract-utilities" Dec 01 15:38:49 crc kubenswrapper[4637]: I1201 15:38:49.239473 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb" containerName="extract-utilities" Dec 01 15:38:49 crc kubenswrapper[4637]: E1201 15:38:49.239494 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb" containerName="registry-server" Dec 01 15:38:49 crc kubenswrapper[4637]: I1201 15:38:49.239500 4637 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb" containerName="registry-server" Dec 01 15:38:49 crc kubenswrapper[4637]: I1201 15:38:49.239726 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc0dc2a9-7aa7-4e5c-b1b8-4bd24950f0fb" containerName="registry-server" Dec 01 15:38:49 crc kubenswrapper[4637]: I1201 15:38:49.241175 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcvzb" Dec 01 15:38:49 crc kubenswrapper[4637]: I1201 15:38:49.262336 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcvzb"] Dec 01 15:38:49 crc kubenswrapper[4637]: I1201 15:38:49.328097 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v88n7\" (UniqueName: \"kubernetes.io/projected/00910a03-c961-4720-addf-dfaa3945ad0a-kube-api-access-v88n7\") pod \"redhat-marketplace-fcvzb\" (UID: \"00910a03-c961-4720-addf-dfaa3945ad0a\") " pod="openshift-marketplace/redhat-marketplace-fcvzb" Dec 01 15:38:49 crc kubenswrapper[4637]: I1201 15:38:49.328201 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00910a03-c961-4720-addf-dfaa3945ad0a-catalog-content\") pod \"redhat-marketplace-fcvzb\" (UID: \"00910a03-c961-4720-addf-dfaa3945ad0a\") " pod="openshift-marketplace/redhat-marketplace-fcvzb" Dec 01 15:38:49 crc kubenswrapper[4637]: I1201 15:38:49.328239 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00910a03-c961-4720-addf-dfaa3945ad0a-utilities\") pod \"redhat-marketplace-fcvzb\" (UID: \"00910a03-c961-4720-addf-dfaa3945ad0a\") " pod="openshift-marketplace/redhat-marketplace-fcvzb" Dec 01 15:38:49 crc kubenswrapper[4637]: I1201 15:38:49.429852 4637 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-v88n7\" (UniqueName: \"kubernetes.io/projected/00910a03-c961-4720-addf-dfaa3945ad0a-kube-api-access-v88n7\") pod \"redhat-marketplace-fcvzb\" (UID: \"00910a03-c961-4720-addf-dfaa3945ad0a\") " pod="openshift-marketplace/redhat-marketplace-fcvzb" Dec 01 15:38:49 crc kubenswrapper[4637]: I1201 15:38:49.429945 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00910a03-c961-4720-addf-dfaa3945ad0a-catalog-content\") pod \"redhat-marketplace-fcvzb\" (UID: \"00910a03-c961-4720-addf-dfaa3945ad0a\") " pod="openshift-marketplace/redhat-marketplace-fcvzb" Dec 01 15:38:49 crc kubenswrapper[4637]: I1201 15:38:49.429970 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00910a03-c961-4720-addf-dfaa3945ad0a-utilities\") pod \"redhat-marketplace-fcvzb\" (UID: \"00910a03-c961-4720-addf-dfaa3945ad0a\") " pod="openshift-marketplace/redhat-marketplace-fcvzb" Dec 01 15:38:49 crc kubenswrapper[4637]: I1201 15:38:49.430481 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00910a03-c961-4720-addf-dfaa3945ad0a-utilities\") pod \"redhat-marketplace-fcvzb\" (UID: \"00910a03-c961-4720-addf-dfaa3945ad0a\") " pod="openshift-marketplace/redhat-marketplace-fcvzb" Dec 01 15:38:49 crc kubenswrapper[4637]: I1201 15:38:49.430613 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00910a03-c961-4720-addf-dfaa3945ad0a-catalog-content\") pod \"redhat-marketplace-fcvzb\" (UID: \"00910a03-c961-4720-addf-dfaa3945ad0a\") " pod="openshift-marketplace/redhat-marketplace-fcvzb" Dec 01 15:38:49 crc kubenswrapper[4637]: I1201 15:38:49.455145 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-v88n7\" (UniqueName: \"kubernetes.io/projected/00910a03-c961-4720-addf-dfaa3945ad0a-kube-api-access-v88n7\") pod \"redhat-marketplace-fcvzb\" (UID: \"00910a03-c961-4720-addf-dfaa3945ad0a\") " pod="openshift-marketplace/redhat-marketplace-fcvzb" Dec 01 15:38:49 crc kubenswrapper[4637]: I1201 15:38:49.572906 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcvzb" Dec 01 15:38:50 crc kubenswrapper[4637]: I1201 15:38:50.202447 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcvzb"] Dec 01 15:38:51 crc kubenswrapper[4637]: I1201 15:38:51.056970 4637 generic.go:334] "Generic (PLEG): container finished" podID="00910a03-c961-4720-addf-dfaa3945ad0a" containerID="9e1c66de7aadbf0d2e77d698cf559613ed922aa6080091f6d940d95813ea1bab" exitCode=0 Dec 01 15:38:51 crc kubenswrapper[4637]: I1201 15:38:51.057280 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcvzb" event={"ID":"00910a03-c961-4720-addf-dfaa3945ad0a","Type":"ContainerDied","Data":"9e1c66de7aadbf0d2e77d698cf559613ed922aa6080091f6d940d95813ea1bab"} Dec 01 15:38:51 crc kubenswrapper[4637]: I1201 15:38:51.057304 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcvzb" event={"ID":"00910a03-c961-4720-addf-dfaa3945ad0a","Type":"ContainerStarted","Data":"61ec84897704bbfb04ca0710595d3962e0d0f0f5c54b8f671f81c603777f4761"} Dec 01 15:38:52 crc kubenswrapper[4637]: I1201 15:38:52.772124 4637 scope.go:117] "RemoveContainer" containerID="92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62" Dec 01 15:38:52 crc kubenswrapper[4637]: E1201 15:38:52.773524 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:38:53 crc kubenswrapper[4637]: I1201 15:38:53.100768 4637 generic.go:334] "Generic (PLEG): container finished" podID="00910a03-c961-4720-addf-dfaa3945ad0a" containerID="43bd1b5b20892605f86b070402764160cbe49b9528dd3c420dfb871e930dd18a" exitCode=0 Dec 01 15:38:53 crc kubenswrapper[4637]: I1201 15:38:53.100815 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcvzb" event={"ID":"00910a03-c961-4720-addf-dfaa3945ad0a","Type":"ContainerDied","Data":"43bd1b5b20892605f86b070402764160cbe49b9528dd3c420dfb871e930dd18a"} Dec 01 15:38:54 crc kubenswrapper[4637]: I1201 15:38:54.047796 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fdx99"] Dec 01 15:38:54 crc kubenswrapper[4637]: I1201 15:38:54.054514 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fdx99" Dec 01 15:38:54 crc kubenswrapper[4637]: I1201 15:38:54.119031 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fdx99"] Dec 01 15:38:54 crc kubenswrapper[4637]: I1201 15:38:54.133427 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcvzb" event={"ID":"00910a03-c961-4720-addf-dfaa3945ad0a","Type":"ContainerStarted","Data":"0d854c578e1e2a385285b1a1415e6572fd98ae5313f7d2579fb88e06f6882da5"} Dec 01 15:38:54 crc kubenswrapper[4637]: I1201 15:38:54.140436 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckfs5\" (UniqueName: \"kubernetes.io/projected/b527ae14-8c68-4a01-8716-d93a84e49280-kube-api-access-ckfs5\") pod \"certified-operators-fdx99\" (UID: \"b527ae14-8c68-4a01-8716-d93a84e49280\") " pod="openshift-marketplace/certified-operators-fdx99" Dec 01 15:38:54 crc kubenswrapper[4637]: I1201 15:38:54.140518 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b527ae14-8c68-4a01-8716-d93a84e49280-utilities\") pod \"certified-operators-fdx99\" (UID: \"b527ae14-8c68-4a01-8716-d93a84e49280\") " pod="openshift-marketplace/certified-operators-fdx99" Dec 01 15:38:54 crc kubenswrapper[4637]: I1201 15:38:54.140625 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b527ae14-8c68-4a01-8716-d93a84e49280-catalog-content\") pod \"certified-operators-fdx99\" (UID: \"b527ae14-8c68-4a01-8716-d93a84e49280\") " pod="openshift-marketplace/certified-operators-fdx99" Dec 01 15:38:54 crc kubenswrapper[4637]: I1201 15:38:54.175859 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-fcvzb" podStartSLOduration=2.614941399 podStartE2EDuration="5.175828365s" podCreationTimestamp="2025-12-01 15:38:49 +0000 UTC" firstStartedPulling="2025-12-01 15:38:51.05877591 +0000 UTC m=+3181.576484738" lastFinishedPulling="2025-12-01 15:38:53.619662876 +0000 UTC m=+3184.137371704" observedRunningTime="2025-12-01 15:38:54.159712791 +0000 UTC m=+3184.677421619" watchObservedRunningTime="2025-12-01 15:38:54.175828365 +0000 UTC m=+3184.693537193" Dec 01 15:38:54 crc kubenswrapper[4637]: I1201 15:38:54.244991 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckfs5\" (UniqueName: \"kubernetes.io/projected/b527ae14-8c68-4a01-8716-d93a84e49280-kube-api-access-ckfs5\") pod \"certified-operators-fdx99\" (UID: \"b527ae14-8c68-4a01-8716-d93a84e49280\") " pod="openshift-marketplace/certified-operators-fdx99" Dec 01 15:38:54 crc kubenswrapper[4637]: I1201 15:38:54.245055 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b527ae14-8c68-4a01-8716-d93a84e49280-utilities\") pod \"certified-operators-fdx99\" (UID: \"b527ae14-8c68-4a01-8716-d93a84e49280\") " pod="openshift-marketplace/certified-operators-fdx99" Dec 01 15:38:54 crc kubenswrapper[4637]: I1201 15:38:54.245137 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b527ae14-8c68-4a01-8716-d93a84e49280-catalog-content\") pod \"certified-operators-fdx99\" (UID: \"b527ae14-8c68-4a01-8716-d93a84e49280\") " pod="openshift-marketplace/certified-operators-fdx99" Dec 01 15:38:54 crc kubenswrapper[4637]: I1201 15:38:54.245543 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b527ae14-8c68-4a01-8716-d93a84e49280-utilities\") pod \"certified-operators-fdx99\" (UID: 
\"b527ae14-8c68-4a01-8716-d93a84e49280\") " pod="openshift-marketplace/certified-operators-fdx99" Dec 01 15:38:54 crc kubenswrapper[4637]: I1201 15:38:54.245755 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b527ae14-8c68-4a01-8716-d93a84e49280-catalog-content\") pod \"certified-operators-fdx99\" (UID: \"b527ae14-8c68-4a01-8716-d93a84e49280\") " pod="openshift-marketplace/certified-operators-fdx99" Dec 01 15:38:54 crc kubenswrapper[4637]: I1201 15:38:54.265908 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckfs5\" (UniqueName: \"kubernetes.io/projected/b527ae14-8c68-4a01-8716-d93a84e49280-kube-api-access-ckfs5\") pod \"certified-operators-fdx99\" (UID: \"b527ae14-8c68-4a01-8716-d93a84e49280\") " pod="openshift-marketplace/certified-operators-fdx99" Dec 01 15:38:54 crc kubenswrapper[4637]: I1201 15:38:54.389639 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fdx99" Dec 01 15:38:55 crc kubenswrapper[4637]: I1201 15:38:55.010162 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fdx99"] Dec 01 15:38:55 crc kubenswrapper[4637]: I1201 15:38:55.142200 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdx99" event={"ID":"b527ae14-8c68-4a01-8716-d93a84e49280","Type":"ContainerStarted","Data":"7a1f071ad69825f828d8dc88e4cff04c5456c138a26c06292bceb05d7b6d6985"} Dec 01 15:38:56 crc kubenswrapper[4637]: I1201 15:38:56.152853 4637 generic.go:334] "Generic (PLEG): container finished" podID="b527ae14-8c68-4a01-8716-d93a84e49280" containerID="fad6cd3e60dff7671740f049d1812036bee7b8203d1beede00da547d39e66e32" exitCode=0 Dec 01 15:38:56 crc kubenswrapper[4637]: I1201 15:38:56.152949 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdx99" event={"ID":"b527ae14-8c68-4a01-8716-d93a84e49280","Type":"ContainerDied","Data":"fad6cd3e60dff7671740f049d1812036bee7b8203d1beede00da547d39e66e32"} Dec 01 15:38:57 crc kubenswrapper[4637]: I1201 15:38:57.167991 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdx99" event={"ID":"b527ae14-8c68-4a01-8716-d93a84e49280","Type":"ContainerStarted","Data":"d63f3faea7362cb53975ecc24f6980c41b7e2d2168596a559456709262b68511"} Dec 01 15:38:58 crc kubenswrapper[4637]: I1201 15:38:58.178078 4637 generic.go:334] "Generic (PLEG): container finished" podID="b527ae14-8c68-4a01-8716-d93a84e49280" containerID="d63f3faea7362cb53975ecc24f6980c41b7e2d2168596a559456709262b68511" exitCode=0 Dec 01 15:38:58 crc kubenswrapper[4637]: I1201 15:38:58.178130 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdx99" 
event={"ID":"b527ae14-8c68-4a01-8716-d93a84e49280","Type":"ContainerDied","Data":"d63f3faea7362cb53975ecc24f6980c41b7e2d2168596a559456709262b68511"} Dec 01 15:38:59 crc kubenswrapper[4637]: I1201 15:38:59.191764 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdx99" event={"ID":"b527ae14-8c68-4a01-8716-d93a84e49280","Type":"ContainerStarted","Data":"d3c3b57d1b260cd3249332aea38def73b110adc6b938550913f1c23a677ea79a"} Dec 01 15:38:59 crc kubenswrapper[4637]: I1201 15:38:59.222533 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fdx99" podStartSLOduration=3.76008186 podStartE2EDuration="6.222491453s" podCreationTimestamp="2025-12-01 15:38:53 +0000 UTC" firstStartedPulling="2025-12-01 15:38:56.155068204 +0000 UTC m=+3186.672777032" lastFinishedPulling="2025-12-01 15:38:58.617477797 +0000 UTC m=+3189.135186625" observedRunningTime="2025-12-01 15:38:59.2205403 +0000 UTC m=+3189.738249148" watchObservedRunningTime="2025-12-01 15:38:59.222491453 +0000 UTC m=+3189.740200281" Dec 01 15:38:59 crc kubenswrapper[4637]: I1201 15:38:59.415861 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rlzjl"] Dec 01 15:38:59 crc kubenswrapper[4637]: I1201 15:38:59.417771 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rlzjl" Dec 01 15:38:59 crc kubenswrapper[4637]: I1201 15:38:59.448406 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rlzjl"] Dec 01 15:38:59 crc kubenswrapper[4637]: I1201 15:38:59.479109 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vzxb\" (UniqueName: \"kubernetes.io/projected/8d6184db-7543-4b75-a017-877fa23fdb22-kube-api-access-8vzxb\") pod \"community-operators-rlzjl\" (UID: \"8d6184db-7543-4b75-a017-877fa23fdb22\") " pod="openshift-marketplace/community-operators-rlzjl" Dec 01 15:38:59 crc kubenswrapper[4637]: I1201 15:38:59.479322 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d6184db-7543-4b75-a017-877fa23fdb22-utilities\") pod \"community-operators-rlzjl\" (UID: \"8d6184db-7543-4b75-a017-877fa23fdb22\") " pod="openshift-marketplace/community-operators-rlzjl" Dec 01 15:38:59 crc kubenswrapper[4637]: I1201 15:38:59.479373 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d6184db-7543-4b75-a017-877fa23fdb22-catalog-content\") pod \"community-operators-rlzjl\" (UID: \"8d6184db-7543-4b75-a017-877fa23fdb22\") " pod="openshift-marketplace/community-operators-rlzjl" Dec 01 15:38:59 crc kubenswrapper[4637]: I1201 15:38:59.573581 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fcvzb" Dec 01 15:38:59 crc kubenswrapper[4637]: I1201 15:38:59.573660 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fcvzb" Dec 01 15:38:59 crc kubenswrapper[4637]: I1201 15:38:59.581063 4637 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d6184db-7543-4b75-a017-877fa23fdb22-utilities\") pod \"community-operators-rlzjl\" (UID: \"8d6184db-7543-4b75-a017-877fa23fdb22\") " pod="openshift-marketplace/community-operators-rlzjl" Dec 01 15:38:59 crc kubenswrapper[4637]: I1201 15:38:59.581131 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d6184db-7543-4b75-a017-877fa23fdb22-catalog-content\") pod \"community-operators-rlzjl\" (UID: \"8d6184db-7543-4b75-a017-877fa23fdb22\") " pod="openshift-marketplace/community-operators-rlzjl" Dec 01 15:38:59 crc kubenswrapper[4637]: I1201 15:38:59.581193 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vzxb\" (UniqueName: \"kubernetes.io/projected/8d6184db-7543-4b75-a017-877fa23fdb22-kube-api-access-8vzxb\") pod \"community-operators-rlzjl\" (UID: \"8d6184db-7543-4b75-a017-877fa23fdb22\") " pod="openshift-marketplace/community-operators-rlzjl" Dec 01 15:38:59 crc kubenswrapper[4637]: I1201 15:38:59.582086 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d6184db-7543-4b75-a017-877fa23fdb22-utilities\") pod \"community-operators-rlzjl\" (UID: \"8d6184db-7543-4b75-a017-877fa23fdb22\") " pod="openshift-marketplace/community-operators-rlzjl" Dec 01 15:38:59 crc kubenswrapper[4637]: I1201 15:38:59.582344 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d6184db-7543-4b75-a017-877fa23fdb22-catalog-content\") pod \"community-operators-rlzjl\" (UID: \"8d6184db-7543-4b75-a017-877fa23fdb22\") " pod="openshift-marketplace/community-operators-rlzjl" Dec 01 15:38:59 crc kubenswrapper[4637]: I1201 15:38:59.621897 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8vzxb\" (UniqueName: \"kubernetes.io/projected/8d6184db-7543-4b75-a017-877fa23fdb22-kube-api-access-8vzxb\") pod \"community-operators-rlzjl\" (UID: \"8d6184db-7543-4b75-a017-877fa23fdb22\") " pod="openshift-marketplace/community-operators-rlzjl" Dec 01 15:38:59 crc kubenswrapper[4637]: I1201 15:38:59.651013 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fcvzb" Dec 01 15:38:59 crc kubenswrapper[4637]: I1201 15:38:59.737630 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rlzjl" Dec 01 15:39:00 crc kubenswrapper[4637]: I1201 15:39:00.279277 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fcvzb" Dec 01 15:39:00 crc kubenswrapper[4637]: I1201 15:39:00.315156 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rlzjl"] Dec 01 15:39:01 crc kubenswrapper[4637]: I1201 15:39:01.216609 4637 generic.go:334] "Generic (PLEG): container finished" podID="8d6184db-7543-4b75-a017-877fa23fdb22" containerID="683485923bf1aecc7b436eec5420092c262ced3531ec4660b8d0c3f94f5d35ea" exitCode=0 Dec 01 15:39:01 crc kubenswrapper[4637]: I1201 15:39:01.216729 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rlzjl" event={"ID":"8d6184db-7543-4b75-a017-877fa23fdb22","Type":"ContainerDied","Data":"683485923bf1aecc7b436eec5420092c262ced3531ec4660b8d0c3f94f5d35ea"} Dec 01 15:39:01 crc kubenswrapper[4637]: I1201 15:39:01.217491 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rlzjl" event={"ID":"8d6184db-7543-4b75-a017-877fa23fdb22","Type":"ContainerStarted","Data":"c607b8a3474a3e0c4a21dc9f474e2c968bf5eaeb3e0f19916ab525d44935aa4e"} Dec 01 15:39:02 crc kubenswrapper[4637]: I1201 15:39:02.232561 4637 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rlzjl" event={"ID":"8d6184db-7543-4b75-a017-877fa23fdb22","Type":"ContainerStarted","Data":"70d542e580dae35a9b1c79c818ee981331902448c838b4c2633b984aaa2f798b"} Dec 01 15:39:02 crc kubenswrapper[4637]: I1201 15:39:02.410473 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcvzb"] Dec 01 15:39:02 crc kubenswrapper[4637]: I1201 15:39:02.410833 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fcvzb" podUID="00910a03-c961-4720-addf-dfaa3945ad0a" containerName="registry-server" containerID="cri-o://0d854c578e1e2a385285b1a1415e6572fd98ae5313f7d2579fb88e06f6882da5" gracePeriod=2 Dec 01 15:39:03 crc kubenswrapper[4637]: I1201 15:39:03.249680 4637 generic.go:334] "Generic (PLEG): container finished" podID="00910a03-c961-4720-addf-dfaa3945ad0a" containerID="0d854c578e1e2a385285b1a1415e6572fd98ae5313f7d2579fb88e06f6882da5" exitCode=0 Dec 01 15:39:03 crc kubenswrapper[4637]: I1201 15:39:03.249800 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcvzb" event={"ID":"00910a03-c961-4720-addf-dfaa3945ad0a","Type":"ContainerDied","Data":"0d854c578e1e2a385285b1a1415e6572fd98ae5313f7d2579fb88e06f6882da5"} Dec 01 15:39:03 crc kubenswrapper[4637]: I1201 15:39:03.253639 4637 generic.go:334] "Generic (PLEG): container finished" podID="8d6184db-7543-4b75-a017-877fa23fdb22" containerID="70d542e580dae35a9b1c79c818ee981331902448c838b4c2633b984aaa2f798b" exitCode=0 Dec 01 15:39:03 crc kubenswrapper[4637]: I1201 15:39:03.253677 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rlzjl" event={"ID":"8d6184db-7543-4b75-a017-877fa23fdb22","Type":"ContainerDied","Data":"70d542e580dae35a9b1c79c818ee981331902448c838b4c2633b984aaa2f798b"} Dec 01 15:39:03 crc kubenswrapper[4637]: I1201 15:39:03.721477 
4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcvzb" Dec 01 15:39:03 crc kubenswrapper[4637]: I1201 15:39:03.810987 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00910a03-c961-4720-addf-dfaa3945ad0a-utilities\") pod \"00910a03-c961-4720-addf-dfaa3945ad0a\" (UID: \"00910a03-c961-4720-addf-dfaa3945ad0a\") " Dec 01 15:39:03 crc kubenswrapper[4637]: I1201 15:39:03.811187 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v88n7\" (UniqueName: \"kubernetes.io/projected/00910a03-c961-4720-addf-dfaa3945ad0a-kube-api-access-v88n7\") pod \"00910a03-c961-4720-addf-dfaa3945ad0a\" (UID: \"00910a03-c961-4720-addf-dfaa3945ad0a\") " Dec 01 15:39:03 crc kubenswrapper[4637]: I1201 15:39:03.811217 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00910a03-c961-4720-addf-dfaa3945ad0a-catalog-content\") pod \"00910a03-c961-4720-addf-dfaa3945ad0a\" (UID: \"00910a03-c961-4720-addf-dfaa3945ad0a\") " Dec 01 15:39:03 crc kubenswrapper[4637]: I1201 15:39:03.813381 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00910a03-c961-4720-addf-dfaa3945ad0a-utilities" (OuterVolumeSpecName: "utilities") pod "00910a03-c961-4720-addf-dfaa3945ad0a" (UID: "00910a03-c961-4720-addf-dfaa3945ad0a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:39:03 crc kubenswrapper[4637]: I1201 15:39:03.835671 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00910a03-c961-4720-addf-dfaa3945ad0a-kube-api-access-v88n7" (OuterVolumeSpecName: "kube-api-access-v88n7") pod "00910a03-c961-4720-addf-dfaa3945ad0a" (UID: "00910a03-c961-4720-addf-dfaa3945ad0a"). 
InnerVolumeSpecName "kube-api-access-v88n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:39:03 crc kubenswrapper[4637]: I1201 15:39:03.845568 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00910a03-c961-4720-addf-dfaa3945ad0a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00910a03-c961-4720-addf-dfaa3945ad0a" (UID: "00910a03-c961-4720-addf-dfaa3945ad0a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:39:03 crc kubenswrapper[4637]: I1201 15:39:03.913649 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00910a03-c961-4720-addf-dfaa3945ad0a-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 15:39:03 crc kubenswrapper[4637]: I1201 15:39:03.913693 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v88n7\" (UniqueName: \"kubernetes.io/projected/00910a03-c961-4720-addf-dfaa3945ad0a-kube-api-access-v88n7\") on node \"crc\" DevicePath \"\"" Dec 01 15:39:03 crc kubenswrapper[4637]: I1201 15:39:03.913706 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00910a03-c961-4720-addf-dfaa3945ad0a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 15:39:04 crc kubenswrapper[4637]: I1201 15:39:04.267097 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rlzjl" event={"ID":"8d6184db-7543-4b75-a017-877fa23fdb22","Type":"ContainerStarted","Data":"fe09771b58ab2d22ffe3d9a80ef54a79a58950ca9ece26c9dfc2427a153a08ea"} Dec 01 15:39:04 crc kubenswrapper[4637]: I1201 15:39:04.269735 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcvzb" event={"ID":"00910a03-c961-4720-addf-dfaa3945ad0a","Type":"ContainerDied","Data":"61ec84897704bbfb04ca0710595d3962e0d0f0f5c54b8f671f81c603777f4761"} 
Dec 01 15:39:04 crc kubenswrapper[4637]: I1201 15:39:04.269784 4637 scope.go:117] "RemoveContainer" containerID="0d854c578e1e2a385285b1a1415e6572fd98ae5313f7d2579fb88e06f6882da5" Dec 01 15:39:04 crc kubenswrapper[4637]: I1201 15:39:04.269988 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcvzb" Dec 01 15:39:04 crc kubenswrapper[4637]: I1201 15:39:04.301607 4637 scope.go:117] "RemoveContainer" containerID="43bd1b5b20892605f86b070402764160cbe49b9528dd3c420dfb871e930dd18a" Dec 01 15:39:04 crc kubenswrapper[4637]: I1201 15:39:04.312537 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rlzjl" podStartSLOduration=2.693972031 podStartE2EDuration="5.312511699s" podCreationTimestamp="2025-12-01 15:38:59 +0000 UTC" firstStartedPulling="2025-12-01 15:39:01.21875152 +0000 UTC m=+3191.736460348" lastFinishedPulling="2025-12-01 15:39:03.837291188 +0000 UTC m=+3194.355000016" observedRunningTime="2025-12-01 15:39:04.304825141 +0000 UTC m=+3194.822533969" watchObservedRunningTime="2025-12-01 15:39:04.312511699 +0000 UTC m=+3194.830220527" Dec 01 15:39:04 crc kubenswrapper[4637]: I1201 15:39:04.367358 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcvzb"] Dec 01 15:39:04 crc kubenswrapper[4637]: I1201 15:39:04.373133 4637 scope.go:117] "RemoveContainer" containerID="9e1c66de7aadbf0d2e77d698cf559613ed922aa6080091f6d940d95813ea1bab" Dec 01 15:39:04 crc kubenswrapper[4637]: I1201 15:39:04.389727 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fdx99" Dec 01 15:39:04 crc kubenswrapper[4637]: I1201 15:39:04.390578 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcvzb"] Dec 01 15:39:04 crc kubenswrapper[4637]: I1201 15:39:04.390822 4637 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fdx99" Dec 01 15:39:05 crc kubenswrapper[4637]: I1201 15:39:05.461013 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fdx99" podUID="b527ae14-8c68-4a01-8716-d93a84e49280" containerName="registry-server" probeResult="failure" output=< Dec 01 15:39:05 crc kubenswrapper[4637]: timeout: failed to connect service ":50051" within 1s Dec 01 15:39:05 crc kubenswrapper[4637]: > Dec 01 15:39:05 crc kubenswrapper[4637]: I1201 15:39:05.784176 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00910a03-c961-4720-addf-dfaa3945ad0a" path="/var/lib/kubelet/pods/00910a03-c961-4720-addf-dfaa3945ad0a/volumes" Dec 01 15:39:06 crc kubenswrapper[4637]: I1201 15:39:06.772410 4637 scope.go:117] "RemoveContainer" containerID="92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62" Dec 01 15:39:06 crc kubenswrapper[4637]: E1201 15:39:06.772705 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:39:09 crc kubenswrapper[4637]: I1201 15:39:09.738986 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rlzjl" Dec 01 15:39:09 crc kubenswrapper[4637]: I1201 15:39:09.739598 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rlzjl" Dec 01 15:39:09 crc kubenswrapper[4637]: I1201 15:39:09.820744 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-rlzjl" Dec 01 15:39:10 crc kubenswrapper[4637]: I1201 15:39:10.401010 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rlzjl" Dec 01 15:39:10 crc kubenswrapper[4637]: I1201 15:39:10.461392 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rlzjl"] Dec 01 15:39:12 crc kubenswrapper[4637]: I1201 15:39:12.364330 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rlzjl" podUID="8d6184db-7543-4b75-a017-877fa23fdb22" containerName="registry-server" containerID="cri-o://fe09771b58ab2d22ffe3d9a80ef54a79a58950ca9ece26c9dfc2427a153a08ea" gracePeriod=2 Dec 01 15:39:13 crc kubenswrapper[4637]: I1201 15:39:13.099655 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rlzjl" Dec 01 15:39:13 crc kubenswrapper[4637]: I1201 15:39:13.188970 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vzxb\" (UniqueName: \"kubernetes.io/projected/8d6184db-7543-4b75-a017-877fa23fdb22-kube-api-access-8vzxb\") pod \"8d6184db-7543-4b75-a017-877fa23fdb22\" (UID: \"8d6184db-7543-4b75-a017-877fa23fdb22\") " Dec 01 15:39:13 crc kubenswrapper[4637]: I1201 15:39:13.189174 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d6184db-7543-4b75-a017-877fa23fdb22-catalog-content\") pod \"8d6184db-7543-4b75-a017-877fa23fdb22\" (UID: \"8d6184db-7543-4b75-a017-877fa23fdb22\") " Dec 01 15:39:13 crc kubenswrapper[4637]: I1201 15:39:13.189292 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d6184db-7543-4b75-a017-877fa23fdb22-utilities\") pod 
\"8d6184db-7543-4b75-a017-877fa23fdb22\" (UID: \"8d6184db-7543-4b75-a017-877fa23fdb22\") " Dec 01 15:39:13 crc kubenswrapper[4637]: I1201 15:39:13.190158 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d6184db-7543-4b75-a017-877fa23fdb22-utilities" (OuterVolumeSpecName: "utilities") pod "8d6184db-7543-4b75-a017-877fa23fdb22" (UID: "8d6184db-7543-4b75-a017-877fa23fdb22"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:39:13 crc kubenswrapper[4637]: I1201 15:39:13.198536 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d6184db-7543-4b75-a017-877fa23fdb22-kube-api-access-8vzxb" (OuterVolumeSpecName: "kube-api-access-8vzxb") pod "8d6184db-7543-4b75-a017-877fa23fdb22" (UID: "8d6184db-7543-4b75-a017-877fa23fdb22"). InnerVolumeSpecName "kube-api-access-8vzxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:39:13 crc kubenswrapper[4637]: I1201 15:39:13.265634 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d6184db-7543-4b75-a017-877fa23fdb22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d6184db-7543-4b75-a017-877fa23fdb22" (UID: "8d6184db-7543-4b75-a017-877fa23fdb22"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:39:13 crc kubenswrapper[4637]: I1201 15:39:13.291578 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vzxb\" (UniqueName: \"kubernetes.io/projected/8d6184db-7543-4b75-a017-877fa23fdb22-kube-api-access-8vzxb\") on node \"crc\" DevicePath \"\"" Dec 01 15:39:13 crc kubenswrapper[4637]: I1201 15:39:13.291615 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d6184db-7543-4b75-a017-877fa23fdb22-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 15:39:13 crc kubenswrapper[4637]: I1201 15:39:13.291627 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d6184db-7543-4b75-a017-877fa23fdb22-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 15:39:13 crc kubenswrapper[4637]: I1201 15:39:13.374676 4637 generic.go:334] "Generic (PLEG): container finished" podID="8d6184db-7543-4b75-a017-877fa23fdb22" containerID="fe09771b58ab2d22ffe3d9a80ef54a79a58950ca9ece26c9dfc2427a153a08ea" exitCode=0 Dec 01 15:39:13 crc kubenswrapper[4637]: I1201 15:39:13.374748 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rlzjl" Dec 01 15:39:13 crc kubenswrapper[4637]: I1201 15:39:13.374731 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rlzjl" event={"ID":"8d6184db-7543-4b75-a017-877fa23fdb22","Type":"ContainerDied","Data":"fe09771b58ab2d22ffe3d9a80ef54a79a58950ca9ece26c9dfc2427a153a08ea"} Dec 01 15:39:13 crc kubenswrapper[4637]: I1201 15:39:13.374874 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rlzjl" event={"ID":"8d6184db-7543-4b75-a017-877fa23fdb22","Type":"ContainerDied","Data":"c607b8a3474a3e0c4a21dc9f474e2c968bf5eaeb3e0f19916ab525d44935aa4e"} Dec 01 15:39:13 crc kubenswrapper[4637]: I1201 15:39:13.374899 4637 scope.go:117] "RemoveContainer" containerID="fe09771b58ab2d22ffe3d9a80ef54a79a58950ca9ece26c9dfc2427a153a08ea" Dec 01 15:39:13 crc kubenswrapper[4637]: I1201 15:39:13.395769 4637 scope.go:117] "RemoveContainer" containerID="70d542e580dae35a9b1c79c818ee981331902448c838b4c2633b984aaa2f798b" Dec 01 15:39:13 crc kubenswrapper[4637]: I1201 15:39:13.428620 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rlzjl"] Dec 01 15:39:13 crc kubenswrapper[4637]: I1201 15:39:13.437204 4637 scope.go:117] "RemoveContainer" containerID="683485923bf1aecc7b436eec5420092c262ced3531ec4660b8d0c3f94f5d35ea" Dec 01 15:39:13 crc kubenswrapper[4637]: I1201 15:39:13.443097 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rlzjl"] Dec 01 15:39:13 crc kubenswrapper[4637]: I1201 15:39:13.460106 4637 scope.go:117] "RemoveContainer" containerID="fe09771b58ab2d22ffe3d9a80ef54a79a58950ca9ece26c9dfc2427a153a08ea" Dec 01 15:39:13 crc kubenswrapper[4637]: E1201 15:39:13.460897 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fe09771b58ab2d22ffe3d9a80ef54a79a58950ca9ece26c9dfc2427a153a08ea\": container with ID starting with fe09771b58ab2d22ffe3d9a80ef54a79a58950ca9ece26c9dfc2427a153a08ea not found: ID does not exist" containerID="fe09771b58ab2d22ffe3d9a80ef54a79a58950ca9ece26c9dfc2427a153a08ea" Dec 01 15:39:13 crc kubenswrapper[4637]: I1201 15:39:13.461010 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe09771b58ab2d22ffe3d9a80ef54a79a58950ca9ece26c9dfc2427a153a08ea"} err="failed to get container status \"fe09771b58ab2d22ffe3d9a80ef54a79a58950ca9ece26c9dfc2427a153a08ea\": rpc error: code = NotFound desc = could not find container \"fe09771b58ab2d22ffe3d9a80ef54a79a58950ca9ece26c9dfc2427a153a08ea\": container with ID starting with fe09771b58ab2d22ffe3d9a80ef54a79a58950ca9ece26c9dfc2427a153a08ea not found: ID does not exist" Dec 01 15:39:13 crc kubenswrapper[4637]: I1201 15:39:13.461049 4637 scope.go:117] "RemoveContainer" containerID="70d542e580dae35a9b1c79c818ee981331902448c838b4c2633b984aaa2f798b" Dec 01 15:39:13 crc kubenswrapper[4637]: E1201 15:39:13.461399 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70d542e580dae35a9b1c79c818ee981331902448c838b4c2633b984aaa2f798b\": container with ID starting with 70d542e580dae35a9b1c79c818ee981331902448c838b4c2633b984aaa2f798b not found: ID does not exist" containerID="70d542e580dae35a9b1c79c818ee981331902448c838b4c2633b984aaa2f798b" Dec 01 15:39:13 crc kubenswrapper[4637]: I1201 15:39:13.461435 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70d542e580dae35a9b1c79c818ee981331902448c838b4c2633b984aaa2f798b"} err="failed to get container status \"70d542e580dae35a9b1c79c818ee981331902448c838b4c2633b984aaa2f798b\": rpc error: code = NotFound desc = could not find container \"70d542e580dae35a9b1c79c818ee981331902448c838b4c2633b984aaa2f798b\": container with ID 
starting with 70d542e580dae35a9b1c79c818ee981331902448c838b4c2633b984aaa2f798b not found: ID does not exist" Dec 01 15:39:13 crc kubenswrapper[4637]: I1201 15:39:13.461462 4637 scope.go:117] "RemoveContainer" containerID="683485923bf1aecc7b436eec5420092c262ced3531ec4660b8d0c3f94f5d35ea" Dec 01 15:39:13 crc kubenswrapper[4637]: E1201 15:39:13.461692 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"683485923bf1aecc7b436eec5420092c262ced3531ec4660b8d0c3f94f5d35ea\": container with ID starting with 683485923bf1aecc7b436eec5420092c262ced3531ec4660b8d0c3f94f5d35ea not found: ID does not exist" containerID="683485923bf1aecc7b436eec5420092c262ced3531ec4660b8d0c3f94f5d35ea" Dec 01 15:39:13 crc kubenswrapper[4637]: I1201 15:39:13.461719 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"683485923bf1aecc7b436eec5420092c262ced3531ec4660b8d0c3f94f5d35ea"} err="failed to get container status \"683485923bf1aecc7b436eec5420092c262ced3531ec4660b8d0c3f94f5d35ea\": rpc error: code = NotFound desc = could not find container \"683485923bf1aecc7b436eec5420092c262ced3531ec4660b8d0c3f94f5d35ea\": container with ID starting with 683485923bf1aecc7b436eec5420092c262ced3531ec4660b8d0c3f94f5d35ea not found: ID does not exist" Dec 01 15:39:13 crc kubenswrapper[4637]: I1201 15:39:13.807035 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d6184db-7543-4b75-a017-877fa23fdb22" path="/var/lib/kubelet/pods/8d6184db-7543-4b75-a017-877fa23fdb22/volumes" Dec 01 15:39:14 crc kubenswrapper[4637]: I1201 15:39:14.434275 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fdx99" Dec 01 15:39:14 crc kubenswrapper[4637]: I1201 15:39:14.498227 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fdx99" Dec 01 15:39:15 crc 
kubenswrapper[4637]: I1201 15:39:15.463587 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fdx99"] Dec 01 15:39:16 crc kubenswrapper[4637]: I1201 15:39:16.403715 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fdx99" podUID="b527ae14-8c68-4a01-8716-d93a84e49280" containerName="registry-server" containerID="cri-o://d3c3b57d1b260cd3249332aea38def73b110adc6b938550913f1c23a677ea79a" gracePeriod=2 Dec 01 15:39:17 crc kubenswrapper[4637]: I1201 15:39:17.015527 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fdx99" Dec 01 15:39:17 crc kubenswrapper[4637]: I1201 15:39:17.162264 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckfs5\" (UniqueName: \"kubernetes.io/projected/b527ae14-8c68-4a01-8716-d93a84e49280-kube-api-access-ckfs5\") pod \"b527ae14-8c68-4a01-8716-d93a84e49280\" (UID: \"b527ae14-8c68-4a01-8716-d93a84e49280\") " Dec 01 15:39:17 crc kubenswrapper[4637]: I1201 15:39:17.162489 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b527ae14-8c68-4a01-8716-d93a84e49280-utilities\") pod \"b527ae14-8c68-4a01-8716-d93a84e49280\" (UID: \"b527ae14-8c68-4a01-8716-d93a84e49280\") " Dec 01 15:39:17 crc kubenswrapper[4637]: I1201 15:39:17.162572 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b527ae14-8c68-4a01-8716-d93a84e49280-catalog-content\") pod \"b527ae14-8c68-4a01-8716-d93a84e49280\" (UID: \"b527ae14-8c68-4a01-8716-d93a84e49280\") " Dec 01 15:39:17 crc kubenswrapper[4637]: I1201 15:39:17.163617 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b527ae14-8c68-4a01-8716-d93a84e49280-utilities" (OuterVolumeSpecName: "utilities") pod "b527ae14-8c68-4a01-8716-d93a84e49280" (UID: "b527ae14-8c68-4a01-8716-d93a84e49280"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:39:17 crc kubenswrapper[4637]: I1201 15:39:17.166857 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b527ae14-8c68-4a01-8716-d93a84e49280-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 15:39:17 crc kubenswrapper[4637]: I1201 15:39:17.176174 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b527ae14-8c68-4a01-8716-d93a84e49280-kube-api-access-ckfs5" (OuterVolumeSpecName: "kube-api-access-ckfs5") pod "b527ae14-8c68-4a01-8716-d93a84e49280" (UID: "b527ae14-8c68-4a01-8716-d93a84e49280"). InnerVolumeSpecName "kube-api-access-ckfs5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:39:17 crc kubenswrapper[4637]: I1201 15:39:17.216109 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b527ae14-8c68-4a01-8716-d93a84e49280-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b527ae14-8c68-4a01-8716-d93a84e49280" (UID: "b527ae14-8c68-4a01-8716-d93a84e49280"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:39:17 crc kubenswrapper[4637]: I1201 15:39:17.268505 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b527ae14-8c68-4a01-8716-d93a84e49280-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 15:39:17 crc kubenswrapper[4637]: I1201 15:39:17.268540 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckfs5\" (UniqueName: \"kubernetes.io/projected/b527ae14-8c68-4a01-8716-d93a84e49280-kube-api-access-ckfs5\") on node \"crc\" DevicePath \"\"" Dec 01 15:39:17 crc kubenswrapper[4637]: I1201 15:39:17.418553 4637 generic.go:334] "Generic (PLEG): container finished" podID="b527ae14-8c68-4a01-8716-d93a84e49280" containerID="d3c3b57d1b260cd3249332aea38def73b110adc6b938550913f1c23a677ea79a" exitCode=0 Dec 01 15:39:17 crc kubenswrapper[4637]: I1201 15:39:17.418671 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdx99" event={"ID":"b527ae14-8c68-4a01-8716-d93a84e49280","Type":"ContainerDied","Data":"d3c3b57d1b260cd3249332aea38def73b110adc6b938550913f1c23a677ea79a"} Dec 01 15:39:17 crc kubenswrapper[4637]: I1201 15:39:17.418775 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fdx99" Dec 01 15:39:17 crc kubenswrapper[4637]: I1201 15:39:17.419050 4637 scope.go:117] "RemoveContainer" containerID="d3c3b57d1b260cd3249332aea38def73b110adc6b938550913f1c23a677ea79a" Dec 01 15:39:17 crc kubenswrapper[4637]: I1201 15:39:17.419016 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdx99" event={"ID":"b527ae14-8c68-4a01-8716-d93a84e49280","Type":"ContainerDied","Data":"7a1f071ad69825f828d8dc88e4cff04c5456c138a26c06292bceb05d7b6d6985"} Dec 01 15:39:17 crc kubenswrapper[4637]: I1201 15:39:17.454448 4637 scope.go:117] "RemoveContainer" containerID="d63f3faea7362cb53975ecc24f6980c41b7e2d2168596a559456709262b68511" Dec 01 15:39:17 crc kubenswrapper[4637]: I1201 15:39:17.478642 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fdx99"] Dec 01 15:39:17 crc kubenswrapper[4637]: I1201 15:39:17.483185 4637 scope.go:117] "RemoveContainer" containerID="fad6cd3e60dff7671740f049d1812036bee7b8203d1beede00da547d39e66e32" Dec 01 15:39:17 crc kubenswrapper[4637]: I1201 15:39:17.490463 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fdx99"] Dec 01 15:39:17 crc kubenswrapper[4637]: I1201 15:39:17.543691 4637 scope.go:117] "RemoveContainer" containerID="d3c3b57d1b260cd3249332aea38def73b110adc6b938550913f1c23a677ea79a" Dec 01 15:39:17 crc kubenswrapper[4637]: E1201 15:39:17.547443 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3c3b57d1b260cd3249332aea38def73b110adc6b938550913f1c23a677ea79a\": container with ID starting with d3c3b57d1b260cd3249332aea38def73b110adc6b938550913f1c23a677ea79a not found: ID does not exist" containerID="d3c3b57d1b260cd3249332aea38def73b110adc6b938550913f1c23a677ea79a" Dec 01 15:39:17 crc kubenswrapper[4637]: I1201 15:39:17.547489 4637 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3c3b57d1b260cd3249332aea38def73b110adc6b938550913f1c23a677ea79a"} err="failed to get container status \"d3c3b57d1b260cd3249332aea38def73b110adc6b938550913f1c23a677ea79a\": rpc error: code = NotFound desc = could not find container \"d3c3b57d1b260cd3249332aea38def73b110adc6b938550913f1c23a677ea79a\": container with ID starting with d3c3b57d1b260cd3249332aea38def73b110adc6b938550913f1c23a677ea79a not found: ID does not exist" Dec 01 15:39:17 crc kubenswrapper[4637]: I1201 15:39:17.547515 4637 scope.go:117] "RemoveContainer" containerID="d63f3faea7362cb53975ecc24f6980c41b7e2d2168596a559456709262b68511" Dec 01 15:39:17 crc kubenswrapper[4637]: E1201 15:39:17.548444 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d63f3faea7362cb53975ecc24f6980c41b7e2d2168596a559456709262b68511\": container with ID starting with d63f3faea7362cb53975ecc24f6980c41b7e2d2168596a559456709262b68511 not found: ID does not exist" containerID="d63f3faea7362cb53975ecc24f6980c41b7e2d2168596a559456709262b68511" Dec 01 15:39:17 crc kubenswrapper[4637]: I1201 15:39:17.548482 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d63f3faea7362cb53975ecc24f6980c41b7e2d2168596a559456709262b68511"} err="failed to get container status \"d63f3faea7362cb53975ecc24f6980c41b7e2d2168596a559456709262b68511\": rpc error: code = NotFound desc = could not find container \"d63f3faea7362cb53975ecc24f6980c41b7e2d2168596a559456709262b68511\": container with ID starting with d63f3faea7362cb53975ecc24f6980c41b7e2d2168596a559456709262b68511 not found: ID does not exist" Dec 01 15:39:17 crc kubenswrapper[4637]: I1201 15:39:17.548505 4637 scope.go:117] "RemoveContainer" containerID="fad6cd3e60dff7671740f049d1812036bee7b8203d1beede00da547d39e66e32" Dec 01 15:39:17 crc kubenswrapper[4637]: E1201 
15:39:17.549271 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fad6cd3e60dff7671740f049d1812036bee7b8203d1beede00da547d39e66e32\": container with ID starting with fad6cd3e60dff7671740f049d1812036bee7b8203d1beede00da547d39e66e32 not found: ID does not exist" containerID="fad6cd3e60dff7671740f049d1812036bee7b8203d1beede00da547d39e66e32" Dec 01 15:39:17 crc kubenswrapper[4637]: I1201 15:39:17.549379 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fad6cd3e60dff7671740f049d1812036bee7b8203d1beede00da547d39e66e32"} err="failed to get container status \"fad6cd3e60dff7671740f049d1812036bee7b8203d1beede00da547d39e66e32\": rpc error: code = NotFound desc = could not find container \"fad6cd3e60dff7671740f049d1812036bee7b8203d1beede00da547d39e66e32\": container with ID starting with fad6cd3e60dff7671740f049d1812036bee7b8203d1beede00da547d39e66e32 not found: ID does not exist" Dec 01 15:39:17 crc kubenswrapper[4637]: I1201 15:39:17.778648 4637 scope.go:117] "RemoveContainer" containerID="92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62" Dec 01 15:39:17 crc kubenswrapper[4637]: E1201 15:39:17.778895 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:39:17 crc kubenswrapper[4637]: I1201 15:39:17.786499 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b527ae14-8c68-4a01-8716-d93a84e49280" path="/var/lib/kubelet/pods/b527ae14-8c68-4a01-8716-d93a84e49280/volumes" Dec 01 15:39:30 crc kubenswrapper[4637]: I1201 15:39:30.771372 
4637 scope.go:117] "RemoveContainer" containerID="92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62" Dec 01 15:39:30 crc kubenswrapper[4637]: E1201 15:39:30.772279 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:39:44 crc kubenswrapper[4637]: I1201 15:39:44.771123 4637 scope.go:117] "RemoveContainer" containerID="92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62" Dec 01 15:39:44 crc kubenswrapper[4637]: E1201 15:39:44.771747 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:39:58 crc kubenswrapper[4637]: I1201 15:39:58.771839 4637 scope.go:117] "RemoveContainer" containerID="92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62" Dec 01 15:39:59 crc kubenswrapper[4637]: I1201 15:39:59.819168 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerStarted","Data":"c8c4bb69a29afc7b9830904f422f63a3f435cb5072c706ce648a57a585edeb24"} Dec 01 15:41:02 crc kubenswrapper[4637]: I1201 15:41:02.832221 4637 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-wbjjx" podUID="e99ec116-bc40-4275-b124-476b780bf9ca" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 15:42:15 crc kubenswrapper[4637]: I1201 15:42:15.614007 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:42:15 crc kubenswrapper[4637]: I1201 15:42:15.615643 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:42:45 crc kubenswrapper[4637]: I1201 15:42:45.613520 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:42:45 crc kubenswrapper[4637]: I1201 15:42:45.614150 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:43:15 crc kubenswrapper[4637]: I1201 15:43:15.614291 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:43:15 crc kubenswrapper[4637]: I1201 15:43:15.614818 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:43:15 crc kubenswrapper[4637]: I1201 15:43:15.614863 4637 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 15:43:15 crc kubenswrapper[4637]: I1201 15:43:15.615684 4637 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c8c4bb69a29afc7b9830904f422f63a3f435cb5072c706ce648a57a585edeb24"} pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 15:43:15 crc kubenswrapper[4637]: I1201 15:43:15.615733 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" containerID="cri-o://c8c4bb69a29afc7b9830904f422f63a3f435cb5072c706ce648a57a585edeb24" gracePeriod=600 Dec 01 15:43:16 crc kubenswrapper[4637]: I1201 15:43:16.674586 4637 generic.go:334] "Generic (PLEG): container finished" podID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerID="c8c4bb69a29afc7b9830904f422f63a3f435cb5072c706ce648a57a585edeb24" exitCode=0 Dec 01 15:43:16 crc kubenswrapper[4637]: I1201 15:43:16.674655 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerDied","Data":"c8c4bb69a29afc7b9830904f422f63a3f435cb5072c706ce648a57a585edeb24"} Dec 01 15:43:16 crc kubenswrapper[4637]: I1201 15:43:16.675330 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerStarted","Data":"467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8"} Dec 01 15:43:16 crc kubenswrapper[4637]: I1201 15:43:16.675368 4637 scope.go:117] "RemoveContainer" containerID="92482f861f5da290ef6f1c5dbc9ea0dfa404fe4d04db9deee97f5a335e5b6d62" Dec 01 15:45:00 crc kubenswrapper[4637]: I1201 15:45:00.168798 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410065-56mr5"] Dec 01 15:45:00 crc kubenswrapper[4637]: E1201 15:45:00.169604 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d6184db-7543-4b75-a017-877fa23fdb22" containerName="extract-utilities" Dec 01 15:45:00 crc kubenswrapper[4637]: I1201 15:45:00.169618 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d6184db-7543-4b75-a017-877fa23fdb22" containerName="extract-utilities" Dec 01 15:45:00 crc kubenswrapper[4637]: E1201 15:45:00.169634 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00910a03-c961-4720-addf-dfaa3945ad0a" containerName="extract-utilities" Dec 01 15:45:00 crc kubenswrapper[4637]: I1201 15:45:00.169640 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="00910a03-c961-4720-addf-dfaa3945ad0a" containerName="extract-utilities" Dec 01 15:45:00 crc kubenswrapper[4637]: E1201 15:45:00.169649 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b527ae14-8c68-4a01-8716-d93a84e49280" containerName="extract-content" Dec 01 15:45:00 crc kubenswrapper[4637]: I1201 15:45:00.169655 4637 
state_mem.go:107] "Deleted CPUSet assignment" podUID="b527ae14-8c68-4a01-8716-d93a84e49280" containerName="extract-content" Dec 01 15:45:00 crc kubenswrapper[4637]: E1201 15:45:00.169666 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d6184db-7543-4b75-a017-877fa23fdb22" containerName="registry-server" Dec 01 15:45:00 crc kubenswrapper[4637]: I1201 15:45:00.169672 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d6184db-7543-4b75-a017-877fa23fdb22" containerName="registry-server" Dec 01 15:45:00 crc kubenswrapper[4637]: E1201 15:45:00.169687 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b527ae14-8c68-4a01-8716-d93a84e49280" containerName="extract-utilities" Dec 01 15:45:00 crc kubenswrapper[4637]: I1201 15:45:00.169693 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="b527ae14-8c68-4a01-8716-d93a84e49280" containerName="extract-utilities" Dec 01 15:45:00 crc kubenswrapper[4637]: E1201 15:45:00.169708 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00910a03-c961-4720-addf-dfaa3945ad0a" containerName="registry-server" Dec 01 15:45:00 crc kubenswrapper[4637]: I1201 15:45:00.169713 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="00910a03-c961-4720-addf-dfaa3945ad0a" containerName="registry-server" Dec 01 15:45:00 crc kubenswrapper[4637]: E1201 15:45:00.169721 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b527ae14-8c68-4a01-8716-d93a84e49280" containerName="registry-server" Dec 01 15:45:00 crc kubenswrapper[4637]: I1201 15:45:00.169726 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="b527ae14-8c68-4a01-8716-d93a84e49280" containerName="registry-server" Dec 01 15:45:00 crc kubenswrapper[4637]: E1201 15:45:00.169742 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00910a03-c961-4720-addf-dfaa3945ad0a" containerName="extract-content" Dec 01 15:45:00 crc kubenswrapper[4637]: I1201 15:45:00.169747 4637 
state_mem.go:107] "Deleted CPUSet assignment" podUID="00910a03-c961-4720-addf-dfaa3945ad0a" containerName="extract-content" Dec 01 15:45:00 crc kubenswrapper[4637]: E1201 15:45:00.169761 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d6184db-7543-4b75-a017-877fa23fdb22" containerName="extract-content" Dec 01 15:45:00 crc kubenswrapper[4637]: I1201 15:45:00.169766 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d6184db-7543-4b75-a017-877fa23fdb22" containerName="extract-content" Dec 01 15:45:00 crc kubenswrapper[4637]: I1201 15:45:00.169963 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d6184db-7543-4b75-a017-877fa23fdb22" containerName="registry-server" Dec 01 15:45:00 crc kubenswrapper[4637]: I1201 15:45:00.169982 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="b527ae14-8c68-4a01-8716-d93a84e49280" containerName="registry-server" Dec 01 15:45:00 crc kubenswrapper[4637]: I1201 15:45:00.169992 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="00910a03-c961-4720-addf-dfaa3945ad0a" containerName="registry-server" Dec 01 15:45:00 crc kubenswrapper[4637]: I1201 15:45:00.170590 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29410065-56mr5" Dec 01 15:45:00 crc kubenswrapper[4637]: I1201 15:45:00.175328 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 15:45:00 crc kubenswrapper[4637]: I1201 15:45:00.181231 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 15:45:00 crc kubenswrapper[4637]: I1201 15:45:00.193129 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410065-56mr5"] Dec 01 15:45:00 crc kubenswrapper[4637]: I1201 15:45:00.278725 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/defe9973-bf72-49f4-812e-c0b1babf8fcf-secret-volume\") pod \"collect-profiles-29410065-56mr5\" (UID: \"defe9973-bf72-49f4-812e-c0b1babf8fcf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410065-56mr5" Dec 01 15:45:00 crc kubenswrapper[4637]: I1201 15:45:00.279116 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcwp2\" (UniqueName: \"kubernetes.io/projected/defe9973-bf72-49f4-812e-c0b1babf8fcf-kube-api-access-kcwp2\") pod \"collect-profiles-29410065-56mr5\" (UID: \"defe9973-bf72-49f4-812e-c0b1babf8fcf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410065-56mr5" Dec 01 15:45:00 crc kubenswrapper[4637]: I1201 15:45:00.279200 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/defe9973-bf72-49f4-812e-c0b1babf8fcf-config-volume\") pod \"collect-profiles-29410065-56mr5\" (UID: \"defe9973-bf72-49f4-812e-c0b1babf8fcf\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29410065-56mr5" Dec 01 15:45:00 crc kubenswrapper[4637]: I1201 15:45:00.380751 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/defe9973-bf72-49f4-812e-c0b1babf8fcf-secret-volume\") pod \"collect-profiles-29410065-56mr5\" (UID: \"defe9973-bf72-49f4-812e-c0b1babf8fcf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410065-56mr5" Dec 01 15:45:00 crc kubenswrapper[4637]: I1201 15:45:00.381034 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcwp2\" (UniqueName: \"kubernetes.io/projected/defe9973-bf72-49f4-812e-c0b1babf8fcf-kube-api-access-kcwp2\") pod \"collect-profiles-29410065-56mr5\" (UID: \"defe9973-bf72-49f4-812e-c0b1babf8fcf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410065-56mr5" Dec 01 15:45:00 crc kubenswrapper[4637]: I1201 15:45:00.381187 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/defe9973-bf72-49f4-812e-c0b1babf8fcf-config-volume\") pod \"collect-profiles-29410065-56mr5\" (UID: \"defe9973-bf72-49f4-812e-c0b1babf8fcf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410065-56mr5" Dec 01 15:45:00 crc kubenswrapper[4637]: I1201 15:45:00.382202 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/defe9973-bf72-49f4-812e-c0b1babf8fcf-config-volume\") pod \"collect-profiles-29410065-56mr5\" (UID: \"defe9973-bf72-49f4-812e-c0b1babf8fcf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410065-56mr5" Dec 01 15:45:00 crc kubenswrapper[4637]: I1201 15:45:00.395624 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/defe9973-bf72-49f4-812e-c0b1babf8fcf-secret-volume\") pod \"collect-profiles-29410065-56mr5\" (UID: \"defe9973-bf72-49f4-812e-c0b1babf8fcf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410065-56mr5" Dec 01 15:45:00 crc kubenswrapper[4637]: I1201 15:45:00.407439 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcwp2\" (UniqueName: \"kubernetes.io/projected/defe9973-bf72-49f4-812e-c0b1babf8fcf-kube-api-access-kcwp2\") pod \"collect-profiles-29410065-56mr5\" (UID: \"defe9973-bf72-49f4-812e-c0b1babf8fcf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410065-56mr5" Dec 01 15:45:00 crc kubenswrapper[4637]: I1201 15:45:00.502202 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29410065-56mr5" Dec 01 15:45:01 crc kubenswrapper[4637]: I1201 15:45:01.002835 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410065-56mr5"] Dec 01 15:45:01 crc kubenswrapper[4637]: I1201 15:45:01.674411 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29410065-56mr5" event={"ID":"defe9973-bf72-49f4-812e-c0b1babf8fcf","Type":"ContainerDied","Data":"9bc25b5fbb18d505c2ce00bfbfdf90ac05f868018acf0cf746fd81d2b00b03e3"} Dec 01 15:45:01 crc kubenswrapper[4637]: I1201 15:45:01.674275 4637 generic.go:334] "Generic (PLEG): container finished" podID="defe9973-bf72-49f4-812e-c0b1babf8fcf" containerID="9bc25b5fbb18d505c2ce00bfbfdf90ac05f868018acf0cf746fd81d2b00b03e3" exitCode=0 Dec 01 15:45:01 crc kubenswrapper[4637]: I1201 15:45:01.674852 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29410065-56mr5" 
event={"ID":"defe9973-bf72-49f4-812e-c0b1babf8fcf","Type":"ContainerStarted","Data":"8916004945cf86406c9bdd4910bdbd3fbab6014e952b17dabb7838d038cbc3cd"} Dec 01 15:45:03 crc kubenswrapper[4637]: I1201 15:45:03.262756 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29410065-56mr5" Dec 01 15:45:03 crc kubenswrapper[4637]: I1201 15:45:03.344633 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/defe9973-bf72-49f4-812e-c0b1babf8fcf-secret-volume\") pod \"defe9973-bf72-49f4-812e-c0b1babf8fcf\" (UID: \"defe9973-bf72-49f4-812e-c0b1babf8fcf\") " Dec 01 15:45:03 crc kubenswrapper[4637]: I1201 15:45:03.344701 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcwp2\" (UniqueName: \"kubernetes.io/projected/defe9973-bf72-49f4-812e-c0b1babf8fcf-kube-api-access-kcwp2\") pod \"defe9973-bf72-49f4-812e-c0b1babf8fcf\" (UID: \"defe9973-bf72-49f4-812e-c0b1babf8fcf\") " Dec 01 15:45:03 crc kubenswrapper[4637]: I1201 15:45:03.344869 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/defe9973-bf72-49f4-812e-c0b1babf8fcf-config-volume\") pod \"defe9973-bf72-49f4-812e-c0b1babf8fcf\" (UID: \"defe9973-bf72-49f4-812e-c0b1babf8fcf\") " Dec 01 15:45:03 crc kubenswrapper[4637]: I1201 15:45:03.346074 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/defe9973-bf72-49f4-812e-c0b1babf8fcf-config-volume" (OuterVolumeSpecName: "config-volume") pod "defe9973-bf72-49f4-812e-c0b1babf8fcf" (UID: "defe9973-bf72-49f4-812e-c0b1babf8fcf"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:45:03 crc kubenswrapper[4637]: I1201 15:45:03.354100 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defe9973-bf72-49f4-812e-c0b1babf8fcf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "defe9973-bf72-49f4-812e-c0b1babf8fcf" (UID: "defe9973-bf72-49f4-812e-c0b1babf8fcf"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:45:03 crc kubenswrapper[4637]: I1201 15:45:03.354992 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/defe9973-bf72-49f4-812e-c0b1babf8fcf-kube-api-access-kcwp2" (OuterVolumeSpecName: "kube-api-access-kcwp2") pod "defe9973-bf72-49f4-812e-c0b1babf8fcf" (UID: "defe9973-bf72-49f4-812e-c0b1babf8fcf"). InnerVolumeSpecName "kube-api-access-kcwp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:45:03 crc kubenswrapper[4637]: I1201 15:45:03.447831 4637 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/defe9973-bf72-49f4-812e-c0b1babf8fcf-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 15:45:03 crc kubenswrapper[4637]: I1201 15:45:03.447882 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcwp2\" (UniqueName: \"kubernetes.io/projected/defe9973-bf72-49f4-812e-c0b1babf8fcf-kube-api-access-kcwp2\") on node \"crc\" DevicePath \"\"" Dec 01 15:45:03 crc kubenswrapper[4637]: I1201 15:45:03.447891 4637 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/defe9973-bf72-49f4-812e-c0b1babf8fcf-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 15:45:03 crc kubenswrapper[4637]: I1201 15:45:03.693571 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29410065-56mr5" 
event={"ID":"defe9973-bf72-49f4-812e-c0b1babf8fcf","Type":"ContainerDied","Data":"8916004945cf86406c9bdd4910bdbd3fbab6014e952b17dabb7838d038cbc3cd"} Dec 01 15:45:03 crc kubenswrapper[4637]: I1201 15:45:03.693614 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8916004945cf86406c9bdd4910bdbd3fbab6014e952b17dabb7838d038cbc3cd" Dec 01 15:45:03 crc kubenswrapper[4637]: I1201 15:45:03.693957 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29410065-56mr5" Dec 01 15:45:04 crc kubenswrapper[4637]: I1201 15:45:04.345085 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410020-kd9hw"] Dec 01 15:45:04 crc kubenswrapper[4637]: I1201 15:45:04.354592 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410020-kd9hw"] Dec 01 15:45:05 crc kubenswrapper[4637]: I1201 15:45:05.080443 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5m67w"] Dec 01 15:45:05 crc kubenswrapper[4637]: E1201 15:45:05.080859 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="defe9973-bf72-49f4-812e-c0b1babf8fcf" containerName="collect-profiles" Dec 01 15:45:05 crc kubenswrapper[4637]: I1201 15:45:05.080880 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="defe9973-bf72-49f4-812e-c0b1babf8fcf" containerName="collect-profiles" Dec 01 15:45:05 crc kubenswrapper[4637]: I1201 15:45:05.081136 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="defe9973-bf72-49f4-812e-c0b1babf8fcf" containerName="collect-profiles" Dec 01 15:45:05 crc kubenswrapper[4637]: I1201 15:45:05.087577 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5m67w" Dec 01 15:45:05 crc kubenswrapper[4637]: I1201 15:45:05.096118 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5m67w"] Dec 01 15:45:05 crc kubenswrapper[4637]: I1201 15:45:05.183015 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bj45\" (UniqueName: \"kubernetes.io/projected/262390bc-6c20-4cb6-abfb-91062fe58e5b-kube-api-access-6bj45\") pod \"redhat-operators-5m67w\" (UID: \"262390bc-6c20-4cb6-abfb-91062fe58e5b\") " pod="openshift-marketplace/redhat-operators-5m67w" Dec 01 15:45:05 crc kubenswrapper[4637]: I1201 15:45:05.183103 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/262390bc-6c20-4cb6-abfb-91062fe58e5b-catalog-content\") pod \"redhat-operators-5m67w\" (UID: \"262390bc-6c20-4cb6-abfb-91062fe58e5b\") " pod="openshift-marketplace/redhat-operators-5m67w" Dec 01 15:45:05 crc kubenswrapper[4637]: I1201 15:45:05.183140 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/262390bc-6c20-4cb6-abfb-91062fe58e5b-utilities\") pod \"redhat-operators-5m67w\" (UID: \"262390bc-6c20-4cb6-abfb-91062fe58e5b\") " pod="openshift-marketplace/redhat-operators-5m67w" Dec 01 15:45:05 crc kubenswrapper[4637]: I1201 15:45:05.283786 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bj45\" (UniqueName: \"kubernetes.io/projected/262390bc-6c20-4cb6-abfb-91062fe58e5b-kube-api-access-6bj45\") pod \"redhat-operators-5m67w\" (UID: \"262390bc-6c20-4cb6-abfb-91062fe58e5b\") " pod="openshift-marketplace/redhat-operators-5m67w" Dec 01 15:45:05 crc kubenswrapper[4637]: I1201 15:45:05.283850 4637 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/262390bc-6c20-4cb6-abfb-91062fe58e5b-catalog-content\") pod \"redhat-operators-5m67w\" (UID: \"262390bc-6c20-4cb6-abfb-91062fe58e5b\") " pod="openshift-marketplace/redhat-operators-5m67w" Dec 01 15:45:05 crc kubenswrapper[4637]: I1201 15:45:05.283877 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/262390bc-6c20-4cb6-abfb-91062fe58e5b-utilities\") pod \"redhat-operators-5m67w\" (UID: \"262390bc-6c20-4cb6-abfb-91062fe58e5b\") " pod="openshift-marketplace/redhat-operators-5m67w" Dec 01 15:45:05 crc kubenswrapper[4637]: I1201 15:45:05.284379 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/262390bc-6c20-4cb6-abfb-91062fe58e5b-utilities\") pod \"redhat-operators-5m67w\" (UID: \"262390bc-6c20-4cb6-abfb-91062fe58e5b\") " pod="openshift-marketplace/redhat-operators-5m67w" Dec 01 15:45:05 crc kubenswrapper[4637]: I1201 15:45:05.284457 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/262390bc-6c20-4cb6-abfb-91062fe58e5b-catalog-content\") pod \"redhat-operators-5m67w\" (UID: \"262390bc-6c20-4cb6-abfb-91062fe58e5b\") " pod="openshift-marketplace/redhat-operators-5m67w" Dec 01 15:45:05 crc kubenswrapper[4637]: I1201 15:45:05.312692 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bj45\" (UniqueName: \"kubernetes.io/projected/262390bc-6c20-4cb6-abfb-91062fe58e5b-kube-api-access-6bj45\") pod \"redhat-operators-5m67w\" (UID: \"262390bc-6c20-4cb6-abfb-91062fe58e5b\") " pod="openshift-marketplace/redhat-operators-5m67w" Dec 01 15:45:05 crc kubenswrapper[4637]: I1201 15:45:05.455542 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5m67w" Dec 01 15:45:05 crc kubenswrapper[4637]: I1201 15:45:05.786034 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8b6a7cd-467f-437c-9fd5-b840f2aa6504" path="/var/lib/kubelet/pods/e8b6a7cd-467f-437c-9fd5-b840f2aa6504/volumes" Dec 01 15:45:06 crc kubenswrapper[4637]: I1201 15:45:06.014318 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5m67w"] Dec 01 15:45:06 crc kubenswrapper[4637]: I1201 15:45:06.725159 4637 generic.go:334] "Generic (PLEG): container finished" podID="262390bc-6c20-4cb6-abfb-91062fe58e5b" containerID="6d931bd463a5290f42c369ce8e884f751858662c9eb1af5121c2e625669701a3" exitCode=0 Dec 01 15:45:06 crc kubenswrapper[4637]: I1201 15:45:06.725378 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5m67w" event={"ID":"262390bc-6c20-4cb6-abfb-91062fe58e5b","Type":"ContainerDied","Data":"6d931bd463a5290f42c369ce8e884f751858662c9eb1af5121c2e625669701a3"} Dec 01 15:45:06 crc kubenswrapper[4637]: I1201 15:45:06.725504 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5m67w" event={"ID":"262390bc-6c20-4cb6-abfb-91062fe58e5b","Type":"ContainerStarted","Data":"217d20e711ae86b1f0f16c25ca2dae563cfc60a4132835242b1a253f9b943a4b"} Dec 01 15:45:06 crc kubenswrapper[4637]: I1201 15:45:06.727516 4637 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 15:45:08 crc kubenswrapper[4637]: I1201 15:45:08.751001 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5m67w" event={"ID":"262390bc-6c20-4cb6-abfb-91062fe58e5b","Type":"ContainerStarted","Data":"d8eca997e36fa5487f9de6915c3111847ecb9ef8c87ff2c1a328c5b4ba2851fb"} Dec 01 15:45:10 crc kubenswrapper[4637]: I1201 15:45:10.774518 4637 generic.go:334] "Generic (PLEG): container finished" 
podID="262390bc-6c20-4cb6-abfb-91062fe58e5b" containerID="d8eca997e36fa5487f9de6915c3111847ecb9ef8c87ff2c1a328c5b4ba2851fb" exitCode=0 Dec 01 15:45:10 crc kubenswrapper[4637]: I1201 15:45:10.774566 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5m67w" event={"ID":"262390bc-6c20-4cb6-abfb-91062fe58e5b","Type":"ContainerDied","Data":"d8eca997e36fa5487f9de6915c3111847ecb9ef8c87ff2c1a328c5b4ba2851fb"} Dec 01 15:45:11 crc kubenswrapper[4637]: I1201 15:45:11.790590 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5m67w" event={"ID":"262390bc-6c20-4cb6-abfb-91062fe58e5b","Type":"ContainerStarted","Data":"5af09493d979ef471739515a4b0404248543691eb49a1f68508c9838ad2299ed"} Dec 01 15:45:11 crc kubenswrapper[4637]: I1201 15:45:11.821667 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5m67w" podStartSLOduration=2.116247993 podStartE2EDuration="6.821633954s" podCreationTimestamp="2025-12-01 15:45:05 +0000 UTC" firstStartedPulling="2025-12-01 15:45:06.72726837 +0000 UTC m=+3557.244977198" lastFinishedPulling="2025-12-01 15:45:11.432654331 +0000 UTC m=+3561.950363159" observedRunningTime="2025-12-01 15:45:11.812339513 +0000 UTC m=+3562.330048341" watchObservedRunningTime="2025-12-01 15:45:11.821633954 +0000 UTC m=+3562.339342782" Dec 01 15:45:15 crc kubenswrapper[4637]: I1201 15:45:15.455710 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5m67w" Dec 01 15:45:15 crc kubenswrapper[4637]: I1201 15:45:15.456264 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5m67w" Dec 01 15:45:15 crc kubenswrapper[4637]: I1201 15:45:15.613852 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:45:15 crc kubenswrapper[4637]: I1201 15:45:15.613972 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:45:16 crc kubenswrapper[4637]: I1201 15:45:16.521243 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5m67w" podUID="262390bc-6c20-4cb6-abfb-91062fe58e5b" containerName="registry-server" probeResult="failure" output=< Dec 01 15:45:16 crc kubenswrapper[4637]: timeout: failed to connect service ":50051" within 1s Dec 01 15:45:16 crc kubenswrapper[4637]: > Dec 01 15:45:25 crc kubenswrapper[4637]: I1201 15:45:25.520894 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5m67w" Dec 01 15:45:25 crc kubenswrapper[4637]: I1201 15:45:25.609696 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5m67w" Dec 01 15:45:28 crc kubenswrapper[4637]: I1201 15:45:28.140770 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5m67w"] Dec 01 15:45:28 crc kubenswrapper[4637]: I1201 15:45:28.141380 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5m67w" podUID="262390bc-6c20-4cb6-abfb-91062fe58e5b" containerName="registry-server" containerID="cri-o://5af09493d979ef471739515a4b0404248543691eb49a1f68508c9838ad2299ed" gracePeriod=2 Dec 01 15:45:28 crc kubenswrapper[4637]: I1201 15:45:28.866130 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5m67w" Dec 01 15:45:28 crc kubenswrapper[4637]: I1201 15:45:28.966485 4637 generic.go:334] "Generic (PLEG): container finished" podID="262390bc-6c20-4cb6-abfb-91062fe58e5b" containerID="5af09493d979ef471739515a4b0404248543691eb49a1f68508c9838ad2299ed" exitCode=0 Dec 01 15:45:28 crc kubenswrapper[4637]: I1201 15:45:28.966543 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5m67w" event={"ID":"262390bc-6c20-4cb6-abfb-91062fe58e5b","Type":"ContainerDied","Data":"5af09493d979ef471739515a4b0404248543691eb49a1f68508c9838ad2299ed"} Dec 01 15:45:28 crc kubenswrapper[4637]: I1201 15:45:28.966579 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5m67w" event={"ID":"262390bc-6c20-4cb6-abfb-91062fe58e5b","Type":"ContainerDied","Data":"217d20e711ae86b1f0f16c25ca2dae563cfc60a4132835242b1a253f9b943a4b"} Dec 01 15:45:28 crc kubenswrapper[4637]: I1201 15:45:28.966614 4637 scope.go:117] "RemoveContainer" containerID="5af09493d979ef471739515a4b0404248543691eb49a1f68508c9838ad2299ed" Dec 01 15:45:28 crc kubenswrapper[4637]: I1201 15:45:28.966653 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5m67w"
Dec 01 15:45:29 crc kubenswrapper[4637]: I1201 15:45:29.000515 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/262390bc-6c20-4cb6-abfb-91062fe58e5b-catalog-content\") pod \"262390bc-6c20-4cb6-abfb-91062fe58e5b\" (UID: \"262390bc-6c20-4cb6-abfb-91062fe58e5b\") "
Dec 01 15:45:29 crc kubenswrapper[4637]: I1201 15:45:29.000565 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bj45\" (UniqueName: \"kubernetes.io/projected/262390bc-6c20-4cb6-abfb-91062fe58e5b-kube-api-access-6bj45\") pod \"262390bc-6c20-4cb6-abfb-91062fe58e5b\" (UID: \"262390bc-6c20-4cb6-abfb-91062fe58e5b\") "
Dec 01 15:45:29 crc kubenswrapper[4637]: I1201 15:45:29.000599 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/262390bc-6c20-4cb6-abfb-91062fe58e5b-utilities\") pod \"262390bc-6c20-4cb6-abfb-91062fe58e5b\" (UID: \"262390bc-6c20-4cb6-abfb-91062fe58e5b\") "
Dec 01 15:45:29 crc kubenswrapper[4637]: I1201 15:45:29.002031 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/262390bc-6c20-4cb6-abfb-91062fe58e5b-utilities" (OuterVolumeSpecName: "utilities") pod "262390bc-6c20-4cb6-abfb-91062fe58e5b" (UID: "262390bc-6c20-4cb6-abfb-91062fe58e5b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 15:45:29 crc kubenswrapper[4637]: I1201 15:45:29.023894 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/262390bc-6c20-4cb6-abfb-91062fe58e5b-kube-api-access-6bj45" (OuterVolumeSpecName: "kube-api-access-6bj45") pod "262390bc-6c20-4cb6-abfb-91062fe58e5b" (UID: "262390bc-6c20-4cb6-abfb-91062fe58e5b"). InnerVolumeSpecName "kube-api-access-6bj45". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 15:45:29 crc kubenswrapper[4637]: I1201 15:45:29.059010 4637 scope.go:117] "RemoveContainer" containerID="d8eca997e36fa5487f9de6915c3111847ecb9ef8c87ff2c1a328c5b4ba2851fb"
Dec 01 15:45:29 crc kubenswrapper[4637]: I1201 15:45:29.107515 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bj45\" (UniqueName: \"kubernetes.io/projected/262390bc-6c20-4cb6-abfb-91062fe58e5b-kube-api-access-6bj45\") on node \"crc\" DevicePath \"\""
Dec 01 15:45:29 crc kubenswrapper[4637]: I1201 15:45:29.107560 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/262390bc-6c20-4cb6-abfb-91062fe58e5b-utilities\") on node \"crc\" DevicePath \"\""
Dec 01 15:45:29 crc kubenswrapper[4637]: I1201 15:45:29.118237 4637 scope.go:117] "RemoveContainer" containerID="6d931bd463a5290f42c369ce8e884f751858662c9eb1af5121c2e625669701a3"
Dec 01 15:45:29 crc kubenswrapper[4637]: I1201 15:45:29.145627 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/262390bc-6c20-4cb6-abfb-91062fe58e5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "262390bc-6c20-4cb6-abfb-91062fe58e5b" (UID: "262390bc-6c20-4cb6-abfb-91062fe58e5b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 15:45:29 crc kubenswrapper[4637]: I1201 15:45:29.158154 4637 scope.go:117] "RemoveContainer" containerID="5af09493d979ef471739515a4b0404248543691eb49a1f68508c9838ad2299ed"
Dec 01 15:45:29 crc kubenswrapper[4637]: E1201 15:45:29.158780 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5af09493d979ef471739515a4b0404248543691eb49a1f68508c9838ad2299ed\": container with ID starting with 5af09493d979ef471739515a4b0404248543691eb49a1f68508c9838ad2299ed not found: ID does not exist" containerID="5af09493d979ef471739515a4b0404248543691eb49a1f68508c9838ad2299ed"
Dec 01 15:45:29 crc kubenswrapper[4637]: I1201 15:45:29.158847 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5af09493d979ef471739515a4b0404248543691eb49a1f68508c9838ad2299ed"} err="failed to get container status \"5af09493d979ef471739515a4b0404248543691eb49a1f68508c9838ad2299ed\": rpc error: code = NotFound desc = could not find container \"5af09493d979ef471739515a4b0404248543691eb49a1f68508c9838ad2299ed\": container with ID starting with 5af09493d979ef471739515a4b0404248543691eb49a1f68508c9838ad2299ed not found: ID does not exist"
Dec 01 15:45:29 crc kubenswrapper[4637]: I1201 15:45:29.158885 4637 scope.go:117] "RemoveContainer" containerID="d8eca997e36fa5487f9de6915c3111847ecb9ef8c87ff2c1a328c5b4ba2851fb"
Dec 01 15:45:29 crc kubenswrapper[4637]: E1201 15:45:29.159393 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8eca997e36fa5487f9de6915c3111847ecb9ef8c87ff2c1a328c5b4ba2851fb\": container with ID starting with d8eca997e36fa5487f9de6915c3111847ecb9ef8c87ff2c1a328c5b4ba2851fb not found: ID does not exist" containerID="d8eca997e36fa5487f9de6915c3111847ecb9ef8c87ff2c1a328c5b4ba2851fb"
Dec 01 15:45:29 crc kubenswrapper[4637]: I1201 15:45:29.159431 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8eca997e36fa5487f9de6915c3111847ecb9ef8c87ff2c1a328c5b4ba2851fb"} err="failed to get container status \"d8eca997e36fa5487f9de6915c3111847ecb9ef8c87ff2c1a328c5b4ba2851fb\": rpc error: code = NotFound desc = could not find container \"d8eca997e36fa5487f9de6915c3111847ecb9ef8c87ff2c1a328c5b4ba2851fb\": container with ID starting with d8eca997e36fa5487f9de6915c3111847ecb9ef8c87ff2c1a328c5b4ba2851fb not found: ID does not exist"
Dec 01 15:45:29 crc kubenswrapper[4637]: I1201 15:45:29.159453 4637 scope.go:117] "RemoveContainer" containerID="6d931bd463a5290f42c369ce8e884f751858662c9eb1af5121c2e625669701a3"
Dec 01 15:45:29 crc kubenswrapper[4637]: E1201 15:45:29.159814 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d931bd463a5290f42c369ce8e884f751858662c9eb1af5121c2e625669701a3\": container with ID starting with 6d931bd463a5290f42c369ce8e884f751858662c9eb1af5121c2e625669701a3 not found: ID does not exist" containerID="6d931bd463a5290f42c369ce8e884f751858662c9eb1af5121c2e625669701a3"
Dec 01 15:45:29 crc kubenswrapper[4637]: I1201 15:45:29.159850 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d931bd463a5290f42c369ce8e884f751858662c9eb1af5121c2e625669701a3"} err="failed to get container status \"6d931bd463a5290f42c369ce8e884f751858662c9eb1af5121c2e625669701a3\": rpc error: code = NotFound desc = could not find container \"6d931bd463a5290f42c369ce8e884f751858662c9eb1af5121c2e625669701a3\": container with ID starting with 6d931bd463a5290f42c369ce8e884f751858662c9eb1af5121c2e625669701a3 not found: ID does not exist"
Dec 01 15:45:29 crc kubenswrapper[4637]: I1201 15:45:29.209133 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/262390bc-6c20-4cb6-abfb-91062fe58e5b-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 01 15:45:29 crc kubenswrapper[4637]: I1201 15:45:29.333620 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5m67w"]
Dec 01 15:45:29 crc kubenswrapper[4637]: I1201 15:45:29.345469 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5m67w"]
Dec 01 15:45:29 crc kubenswrapper[4637]: I1201 15:45:29.785396 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="262390bc-6c20-4cb6-abfb-91062fe58e5b" path="/var/lib/kubelet/pods/262390bc-6c20-4cb6-abfb-91062fe58e5b/volumes"
Dec 01 15:45:34 crc kubenswrapper[4637]: I1201 15:45:34.918762 4637 scope.go:117] "RemoveContainer" containerID="44e7fa27302de930c3554e7e2ca2cc02d72e060d59da38b07def371b14dfe028"
Dec 01 15:45:45 crc kubenswrapper[4637]: I1201 15:45:45.613727 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 15:45:45 crc kubenswrapper[4637]: I1201 15:45:45.614607 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 15:46:15 crc kubenswrapper[4637]: I1201 15:46:15.614235 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 15:46:15 crc kubenswrapper[4637]: I1201 15:46:15.615214 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 15:46:15 crc kubenswrapper[4637]: I1201 15:46:15.615291 4637 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd"
Dec 01 15:46:15 crc kubenswrapper[4637]: I1201 15:46:15.616465 4637 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8"} pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 01 15:46:15 crc kubenswrapper[4637]: I1201 15:46:15.616545 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" containerID="cri-o://467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8" gracePeriod=600
Dec 01 15:46:15 crc kubenswrapper[4637]: E1201 15:46:15.744421 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9"
Dec 01 15:46:16 crc kubenswrapper[4637]: I1201 15:46:16.422717 4637 generic.go:334] "Generic (PLEG): container finished" podID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerID="467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8" exitCode=0
Dec 01 15:46:16 crc kubenswrapper[4637]: I1201 15:46:16.422772 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerDied","Data":"467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8"}
Dec 01 15:46:16 crc kubenswrapper[4637]: I1201 15:46:16.422839 4637 scope.go:117] "RemoveContainer" containerID="c8c4bb69a29afc7b9830904f422f63a3f435cb5072c706ce648a57a585edeb24"
Dec 01 15:46:16 crc kubenswrapper[4637]: I1201 15:46:16.423878 4637 scope.go:117] "RemoveContainer" containerID="467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8"
Dec 01 15:46:16 crc kubenswrapper[4637]: E1201 15:46:16.424407 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9"
Dec 01 15:46:30 crc kubenswrapper[4637]: I1201 15:46:30.771212 4637 scope.go:117] "RemoveContainer" containerID="467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8"
Dec 01 15:46:30 crc kubenswrapper[4637]: E1201 15:46:30.771869 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9"
Dec 01 15:46:44 crc kubenswrapper[4637]: I1201 15:46:44.771747 4637 scope.go:117] "RemoveContainer" containerID="467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8"
Dec 01 15:46:44 crc kubenswrapper[4637]: E1201 15:46:44.772400 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9"
Dec 01 15:46:55 crc kubenswrapper[4637]: I1201 15:46:55.772696 4637 scope.go:117] "RemoveContainer" containerID="467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8"
Dec 01 15:46:55 crc kubenswrapper[4637]: E1201 15:46:55.773669 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9"
Dec 01 15:47:08 crc kubenswrapper[4637]: I1201 15:47:08.771556 4637 scope.go:117] "RemoveContainer" containerID="467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8"
Dec 01 15:47:08 crc kubenswrapper[4637]: E1201 15:47:08.772409 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9"
Dec 01 15:47:21 crc kubenswrapper[4637]: I1201 15:47:21.772801 4637 scope.go:117] "RemoveContainer" containerID="467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8"
Dec 01 15:47:21 crc kubenswrapper[4637]: E1201 15:47:21.773662 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9"
Dec 01 15:47:34 crc kubenswrapper[4637]: I1201 15:47:34.772757 4637 scope.go:117] "RemoveContainer" containerID="467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8"
Dec 01 15:47:34 crc kubenswrapper[4637]: E1201 15:47:34.773701 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9"
Dec 01 15:47:49 crc kubenswrapper[4637]: I1201 15:47:49.775167 4637 scope.go:117] "RemoveContainer" containerID="467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8"
Dec 01 15:47:49 crc kubenswrapper[4637]: E1201 15:47:49.775771 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9"
Dec 01 15:48:01 crc kubenswrapper[4637]: I1201 15:48:01.772024 4637 scope.go:117] "RemoveContainer" containerID="467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8"
Dec 01 15:48:01 crc kubenswrapper[4637]: E1201 15:48:01.773744 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9"
Dec 01 15:48:12 crc kubenswrapper[4637]: I1201 15:48:12.771910 4637 scope.go:117] "RemoveContainer" containerID="467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8"
Dec 01 15:48:12 crc kubenswrapper[4637]: E1201 15:48:12.772757 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9"
Dec 01 15:48:23 crc kubenswrapper[4637]: I1201 15:48:23.771589 4637 scope.go:117] "RemoveContainer" containerID="467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8"
Dec 01 15:48:23 crc kubenswrapper[4637]: E1201 15:48:23.772333 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9"
Dec 01 15:48:38 crc kubenswrapper[4637]: I1201 15:48:38.771540 4637 scope.go:117] "RemoveContainer" containerID="467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8"
Dec 01 15:48:38 crc kubenswrapper[4637]: E1201 15:48:38.772202 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9"
Dec 01 15:48:52 crc kubenswrapper[4637]: I1201 15:48:52.772245 4637 scope.go:117] "RemoveContainer" containerID="467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8"
Dec 01 15:48:52 crc kubenswrapper[4637]: E1201 15:48:52.774289 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9"
Dec 01 15:49:06 crc kubenswrapper[4637]: I1201 15:49:06.772222 4637 scope.go:117] "RemoveContainer" containerID="467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8"
Dec 01 15:49:06 crc kubenswrapper[4637]: E1201 15:49:06.775916 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9"
Dec 01 15:49:21 crc kubenswrapper[4637]: I1201 15:49:21.772722 4637 scope.go:117] "RemoveContainer" containerID="467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8"
Dec 01 15:49:21 crc kubenswrapper[4637]: E1201 15:49:21.773787 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9"
Dec 01 15:49:34 crc kubenswrapper[4637]: I1201 15:49:34.543649 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vqfsg"]
Dec 01 15:49:34 crc kubenswrapper[4637]: E1201 15:49:34.544531 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="262390bc-6c20-4cb6-abfb-91062fe58e5b" containerName="registry-server"
Dec 01 15:49:34 crc kubenswrapper[4637]: I1201 15:49:34.544544 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="262390bc-6c20-4cb6-abfb-91062fe58e5b" containerName="registry-server"
Dec 01 15:49:34 crc kubenswrapper[4637]: E1201 15:49:34.544557 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="262390bc-6c20-4cb6-abfb-91062fe58e5b" containerName="extract-content"
Dec 01 15:49:34 crc kubenswrapper[4637]: I1201 15:49:34.544563 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="262390bc-6c20-4cb6-abfb-91062fe58e5b" containerName="extract-content"
Dec 01 15:49:34 crc kubenswrapper[4637]: E1201 15:49:34.544578 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="262390bc-6c20-4cb6-abfb-91062fe58e5b" containerName="extract-utilities"
Dec 01 15:49:34 crc kubenswrapper[4637]: I1201 15:49:34.544584 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="262390bc-6c20-4cb6-abfb-91062fe58e5b" containerName="extract-utilities"
Dec 01 15:49:34 crc kubenswrapper[4637]: I1201 15:49:34.544783 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="262390bc-6c20-4cb6-abfb-91062fe58e5b" containerName="registry-server"
Dec 01 15:49:34 crc kubenswrapper[4637]: I1201 15:49:34.546106 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vqfsg"
Dec 01 15:49:34 crc kubenswrapper[4637]: I1201 15:49:34.572366 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vqfsg"]
Dec 01 15:49:34 crc kubenswrapper[4637]: I1201 15:49:34.681279 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e7252ab-6651-4c9d-afef-050eb7e97fa8-catalog-content\") pod \"certified-operators-vqfsg\" (UID: \"2e7252ab-6651-4c9d-afef-050eb7e97fa8\") " pod="openshift-marketplace/certified-operators-vqfsg"
Dec 01 15:49:34 crc kubenswrapper[4637]: I1201 15:49:34.681345 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e7252ab-6651-4c9d-afef-050eb7e97fa8-utilities\") pod \"certified-operators-vqfsg\" (UID: \"2e7252ab-6651-4c9d-afef-050eb7e97fa8\") " pod="openshift-marketplace/certified-operators-vqfsg"
Dec 01 15:49:34 crc kubenswrapper[4637]: I1201 15:49:34.681433 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkwv6\" (UniqueName: \"kubernetes.io/projected/2e7252ab-6651-4c9d-afef-050eb7e97fa8-kube-api-access-lkwv6\") pod \"certified-operators-vqfsg\" (UID: \"2e7252ab-6651-4c9d-afef-050eb7e97fa8\") " pod="openshift-marketplace/certified-operators-vqfsg"
Dec 01 15:49:34 crc kubenswrapper[4637]: I1201 15:49:34.783104 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkwv6\" (UniqueName: \"kubernetes.io/projected/2e7252ab-6651-4c9d-afef-050eb7e97fa8-kube-api-access-lkwv6\") pod \"certified-operators-vqfsg\" (UID: \"2e7252ab-6651-4c9d-afef-050eb7e97fa8\") " pod="openshift-marketplace/certified-operators-vqfsg"
Dec 01 15:49:34 crc kubenswrapper[4637]: I1201 15:49:34.783483 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e7252ab-6651-4c9d-afef-050eb7e97fa8-catalog-content\") pod \"certified-operators-vqfsg\" (UID: \"2e7252ab-6651-4c9d-afef-050eb7e97fa8\") " pod="openshift-marketplace/certified-operators-vqfsg"
Dec 01 15:49:34 crc kubenswrapper[4637]: I1201 15:49:34.783603 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e7252ab-6651-4c9d-afef-050eb7e97fa8-utilities\") pod \"certified-operators-vqfsg\" (UID: \"2e7252ab-6651-4c9d-afef-050eb7e97fa8\") " pod="openshift-marketplace/certified-operators-vqfsg"
Dec 01 15:49:34 crc kubenswrapper[4637]: I1201 15:49:34.783959 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e7252ab-6651-4c9d-afef-050eb7e97fa8-catalog-content\") pod \"certified-operators-vqfsg\" (UID: \"2e7252ab-6651-4c9d-afef-050eb7e97fa8\") " pod="openshift-marketplace/certified-operators-vqfsg"
Dec 01 15:49:34 crc kubenswrapper[4637]: I1201 15:49:34.784221 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e7252ab-6651-4c9d-afef-050eb7e97fa8-utilities\") pod \"certified-operators-vqfsg\" (UID: \"2e7252ab-6651-4c9d-afef-050eb7e97fa8\") " pod="openshift-marketplace/certified-operators-vqfsg"
Dec 01 15:49:34 crc kubenswrapper[4637]: I1201 15:49:34.811082 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkwv6\" (UniqueName: \"kubernetes.io/projected/2e7252ab-6651-4c9d-afef-050eb7e97fa8-kube-api-access-lkwv6\") pod \"certified-operators-vqfsg\" (UID: \"2e7252ab-6651-4c9d-afef-050eb7e97fa8\") " pod="openshift-marketplace/certified-operators-vqfsg"
Dec 01 15:49:34 crc kubenswrapper[4637]: I1201 15:49:34.873824 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vqfsg"
Dec 01 15:49:35 crc kubenswrapper[4637]: I1201 15:49:35.651652 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vqfsg"]
Dec 01 15:49:36 crc kubenswrapper[4637]: I1201 15:49:36.228146 4637 generic.go:334] "Generic (PLEG): container finished" podID="2e7252ab-6651-4c9d-afef-050eb7e97fa8" containerID="89b04b62b4060b1c73fbe1920832857ffb099e54a4171dde68f0915decc8a41d" exitCode=0
Dec 01 15:49:36 crc kubenswrapper[4637]: I1201 15:49:36.228249 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqfsg" event={"ID":"2e7252ab-6651-4c9d-afef-050eb7e97fa8","Type":"ContainerDied","Data":"89b04b62b4060b1c73fbe1920832857ffb099e54a4171dde68f0915decc8a41d"}
Dec 01 15:49:36 crc kubenswrapper[4637]: I1201 15:49:36.228399 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqfsg" event={"ID":"2e7252ab-6651-4c9d-afef-050eb7e97fa8","Type":"ContainerStarted","Data":"e5bcf916de6cf5a7ce6a9511678ff5fab4891f41710f67c5879282e6830fd246"}
Dec 01 15:49:36 crc kubenswrapper[4637]: I1201 15:49:36.771327 4637 scope.go:117] "RemoveContainer" containerID="467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8"
Dec 01 15:49:36 crc kubenswrapper[4637]: E1201 15:49:36.772103 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9"
Dec 01 15:49:37 crc kubenswrapper[4637]: I1201 15:49:37.239513 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqfsg" event={"ID":"2e7252ab-6651-4c9d-afef-050eb7e97fa8","Type":"ContainerStarted","Data":"e6e9eca6b95362f7a4e4b15a2b4a0302d90907f51c58d33b7efac2f30646d257"}
Dec 01 15:49:38 crc kubenswrapper[4637]: I1201 15:49:38.249235 4637 generic.go:334] "Generic (PLEG): container finished" podID="2e7252ab-6651-4c9d-afef-050eb7e97fa8" containerID="e6e9eca6b95362f7a4e4b15a2b4a0302d90907f51c58d33b7efac2f30646d257" exitCode=0
Dec 01 15:49:38 crc kubenswrapper[4637]: I1201 15:49:38.250181 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqfsg" event={"ID":"2e7252ab-6651-4c9d-afef-050eb7e97fa8","Type":"ContainerDied","Data":"e6e9eca6b95362f7a4e4b15a2b4a0302d90907f51c58d33b7efac2f30646d257"}
Dec 01 15:49:39 crc kubenswrapper[4637]: I1201 15:49:39.259855 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqfsg" event={"ID":"2e7252ab-6651-4c9d-afef-050eb7e97fa8","Type":"ContainerStarted","Data":"d6257d46d81449fc6c2c0de36923ce2059c3996194bf176804d46e247cff9a22"}
Dec 01 15:49:39 crc kubenswrapper[4637]: I1201 15:49:39.283364 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vqfsg" podStartSLOduration=2.750912757 podStartE2EDuration="5.283346996s" podCreationTimestamp="2025-12-01 15:49:34 +0000 UTC" firstStartedPulling="2025-12-01 15:49:36.229883298 +0000 UTC m=+3826.747592126" lastFinishedPulling="2025-12-01 15:49:38.762317537 +0000 UTC m=+3829.280026365" observedRunningTime="2025-12-01 15:49:39.276422108 +0000 UTC m=+3829.794130936" watchObservedRunningTime="2025-12-01 15:49:39.283346996 +0000 UTC m=+3829.801055824"
Dec 01 15:49:44 crc kubenswrapper[4637]: I1201 15:49:44.874342 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vqfsg"
Dec 01 15:49:44 crc kubenswrapper[4637]: I1201 15:49:44.875281 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vqfsg"
Dec 01 15:49:44 crc kubenswrapper[4637]: I1201 15:49:44.936914 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vqfsg"
Dec 01 15:49:45 crc kubenswrapper[4637]: I1201 15:49:45.363678 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vqfsg"
Dec 01 15:49:45 crc kubenswrapper[4637]: I1201 15:49:45.429438 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vqfsg"]
Dec 01 15:49:47 crc kubenswrapper[4637]: I1201 15:49:47.331452 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vqfsg" podUID="2e7252ab-6651-4c9d-afef-050eb7e97fa8" containerName="registry-server" containerID="cri-o://d6257d46d81449fc6c2c0de36923ce2059c3996194bf176804d46e247cff9a22" gracePeriod=2
Dec 01 15:49:47 crc kubenswrapper[4637]: I1201 15:49:47.607660 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vtfrd"]
Dec 01 15:49:47 crc kubenswrapper[4637]: I1201 15:49:47.610032 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vtfrd"
Dec 01 15:49:47 crc kubenswrapper[4637]: I1201 15:49:47.630858 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vtfrd"]
Dec 01 15:49:47 crc kubenswrapper[4637]: I1201 15:49:47.786157 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dc92848-81ca-42ad-a89d-8235cd2e330d-utilities\") pod \"redhat-marketplace-vtfrd\" (UID: \"9dc92848-81ca-42ad-a89d-8235cd2e330d\") " pod="openshift-marketplace/redhat-marketplace-vtfrd"
Dec 01 15:49:47 crc kubenswrapper[4637]: I1201 15:49:47.786303 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dc92848-81ca-42ad-a89d-8235cd2e330d-catalog-content\") pod \"redhat-marketplace-vtfrd\" (UID: \"9dc92848-81ca-42ad-a89d-8235cd2e330d\") " pod="openshift-marketplace/redhat-marketplace-vtfrd"
Dec 01 15:49:47 crc kubenswrapper[4637]: I1201 15:49:47.786368 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m29k7\" (UniqueName: \"kubernetes.io/projected/9dc92848-81ca-42ad-a89d-8235cd2e330d-kube-api-access-m29k7\") pod \"redhat-marketplace-vtfrd\" (UID: \"9dc92848-81ca-42ad-a89d-8235cd2e330d\") " pod="openshift-marketplace/redhat-marketplace-vtfrd"
Dec 01 15:49:47 crc kubenswrapper[4637]: I1201 15:49:47.887731 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dc92848-81ca-42ad-a89d-8235cd2e330d-catalog-content\") pod \"redhat-marketplace-vtfrd\" (UID: \"9dc92848-81ca-42ad-a89d-8235cd2e330d\") " pod="openshift-marketplace/redhat-marketplace-vtfrd"
Dec 01 15:49:47 crc kubenswrapper[4637]: I1201 15:49:47.887851 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m29k7\" (UniqueName: \"kubernetes.io/projected/9dc92848-81ca-42ad-a89d-8235cd2e330d-kube-api-access-m29k7\") pod \"redhat-marketplace-vtfrd\" (UID: \"9dc92848-81ca-42ad-a89d-8235cd2e330d\") " pod="openshift-marketplace/redhat-marketplace-vtfrd"
Dec 01 15:49:47 crc kubenswrapper[4637]: I1201 15:49:47.887950 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dc92848-81ca-42ad-a89d-8235cd2e330d-utilities\") pod \"redhat-marketplace-vtfrd\" (UID: \"9dc92848-81ca-42ad-a89d-8235cd2e330d\") " pod="openshift-marketplace/redhat-marketplace-vtfrd"
Dec 01 15:49:47 crc kubenswrapper[4637]: I1201 15:49:47.889468 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dc92848-81ca-42ad-a89d-8235cd2e330d-utilities\") pod \"redhat-marketplace-vtfrd\" (UID: \"9dc92848-81ca-42ad-a89d-8235cd2e330d\") " pod="openshift-marketplace/redhat-marketplace-vtfrd"
Dec 01 15:49:47 crc kubenswrapper[4637]: I1201 15:49:47.889535 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dc92848-81ca-42ad-a89d-8235cd2e330d-catalog-content\") pod \"redhat-marketplace-vtfrd\" (UID: \"9dc92848-81ca-42ad-a89d-8235cd2e330d\") " pod="openshift-marketplace/redhat-marketplace-vtfrd"
Dec 01 15:49:47 crc kubenswrapper[4637]: I1201 15:49:47.890759 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vqfsg"
Dec 01 15:49:47 crc kubenswrapper[4637]: I1201 15:49:47.971993 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m29k7\" (UniqueName: \"kubernetes.io/projected/9dc92848-81ca-42ad-a89d-8235cd2e330d-kube-api-access-m29k7\") pod \"redhat-marketplace-vtfrd\" (UID: \"9dc92848-81ca-42ad-a89d-8235cd2e330d\") " pod="openshift-marketplace/redhat-marketplace-vtfrd"
Dec 01 15:49:47 crc kubenswrapper[4637]: I1201 15:49:47.989806 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkwv6\" (UniqueName: \"kubernetes.io/projected/2e7252ab-6651-4c9d-afef-050eb7e97fa8-kube-api-access-lkwv6\") pod \"2e7252ab-6651-4c9d-afef-050eb7e97fa8\" (UID: \"2e7252ab-6651-4c9d-afef-050eb7e97fa8\") "
Dec 01 15:49:47 crc kubenswrapper[4637]: I1201 15:49:47.989893 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e7252ab-6651-4c9d-afef-050eb7e97fa8-catalog-content\") pod \"2e7252ab-6651-4c9d-afef-050eb7e97fa8\" (UID: \"2e7252ab-6651-4c9d-afef-050eb7e97fa8\") "
Dec 01 15:49:47 crc kubenswrapper[4637]: I1201 15:49:47.990005 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e7252ab-6651-4c9d-afef-050eb7e97fa8-utilities\") pod \"2e7252ab-6651-4c9d-afef-050eb7e97fa8\" (UID: \"2e7252ab-6651-4c9d-afef-050eb7e97fa8\") "
Dec 01 15:49:47 crc kubenswrapper[4637]: I1201 15:49:47.991402 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e7252ab-6651-4c9d-afef-050eb7e97fa8-utilities" (OuterVolumeSpecName: "utilities") pod "2e7252ab-6651-4c9d-afef-050eb7e97fa8" (UID: "2e7252ab-6651-4c9d-afef-050eb7e97fa8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 15:49:47 crc kubenswrapper[4637]: I1201 15:49:47.995899 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e7252ab-6651-4c9d-afef-050eb7e97fa8-kube-api-access-lkwv6" (OuterVolumeSpecName: "kube-api-access-lkwv6") pod "2e7252ab-6651-4c9d-afef-050eb7e97fa8" (UID: "2e7252ab-6651-4c9d-afef-050eb7e97fa8"). InnerVolumeSpecName "kube-api-access-lkwv6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 15:49:48 crc kubenswrapper[4637]: I1201 15:49:48.092086 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkwv6\" (UniqueName: \"kubernetes.io/projected/2e7252ab-6651-4c9d-afef-050eb7e97fa8-kube-api-access-lkwv6\") on node \"crc\" DevicePath \"\""
Dec 01 15:49:48 crc kubenswrapper[4637]: I1201 15:49:48.092122 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e7252ab-6651-4c9d-afef-050eb7e97fa8-utilities\") on node \"crc\" DevicePath \"\""
Dec 01 15:49:48 crc kubenswrapper[4637]: I1201 15:49:48.096702 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e7252ab-6651-4c9d-afef-050eb7e97fa8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e7252ab-6651-4c9d-afef-050eb7e97fa8" (UID: "2e7252ab-6651-4c9d-afef-050eb7e97fa8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 15:49:48 crc kubenswrapper[4637]: I1201 15:49:48.193858 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e7252ab-6651-4c9d-afef-050eb7e97fa8-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 01 15:49:48 crc kubenswrapper[4637]: I1201 15:49:48.244274 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vtfrd" Dec 01 15:49:48 crc kubenswrapper[4637]: I1201 15:49:48.349445 4637 generic.go:334] "Generic (PLEG): container finished" podID="2e7252ab-6651-4c9d-afef-050eb7e97fa8" containerID="d6257d46d81449fc6c2c0de36923ce2059c3996194bf176804d46e247cff9a22" exitCode=0 Dec 01 15:49:48 crc kubenswrapper[4637]: I1201 15:49:48.349500 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vqfsg" Dec 01 15:49:48 crc kubenswrapper[4637]: I1201 15:49:48.349505 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqfsg" event={"ID":"2e7252ab-6651-4c9d-afef-050eb7e97fa8","Type":"ContainerDied","Data":"d6257d46d81449fc6c2c0de36923ce2059c3996194bf176804d46e247cff9a22"} Dec 01 15:49:48 crc kubenswrapper[4637]: I1201 15:49:48.349545 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqfsg" event={"ID":"2e7252ab-6651-4c9d-afef-050eb7e97fa8","Type":"ContainerDied","Data":"e5bcf916de6cf5a7ce6a9511678ff5fab4891f41710f67c5879282e6830fd246"} Dec 01 15:49:48 crc kubenswrapper[4637]: I1201 15:49:48.349569 4637 scope.go:117] "RemoveContainer" containerID="d6257d46d81449fc6c2c0de36923ce2059c3996194bf176804d46e247cff9a22" Dec 01 15:49:48 crc kubenswrapper[4637]: I1201 15:49:48.412794 4637 scope.go:117] "RemoveContainer" containerID="e6e9eca6b95362f7a4e4b15a2b4a0302d90907f51c58d33b7efac2f30646d257" Dec 01 15:49:48 crc kubenswrapper[4637]: I1201 15:49:48.424901 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vqfsg"] Dec 01 15:49:48 crc kubenswrapper[4637]: I1201 15:49:48.440859 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vqfsg"] Dec 01 15:49:48 crc kubenswrapper[4637]: I1201 15:49:48.447299 4637 scope.go:117] "RemoveContainer" 
containerID="89b04b62b4060b1c73fbe1920832857ffb099e54a4171dde68f0915decc8a41d" Dec 01 15:49:48 crc kubenswrapper[4637]: I1201 15:49:48.533416 4637 scope.go:117] "RemoveContainer" containerID="d6257d46d81449fc6c2c0de36923ce2059c3996194bf176804d46e247cff9a22" Dec 01 15:49:48 crc kubenswrapper[4637]: E1201 15:49:48.534005 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6257d46d81449fc6c2c0de36923ce2059c3996194bf176804d46e247cff9a22\": container with ID starting with d6257d46d81449fc6c2c0de36923ce2059c3996194bf176804d46e247cff9a22 not found: ID does not exist" containerID="d6257d46d81449fc6c2c0de36923ce2059c3996194bf176804d46e247cff9a22" Dec 01 15:49:48 crc kubenswrapper[4637]: I1201 15:49:48.534052 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6257d46d81449fc6c2c0de36923ce2059c3996194bf176804d46e247cff9a22"} err="failed to get container status \"d6257d46d81449fc6c2c0de36923ce2059c3996194bf176804d46e247cff9a22\": rpc error: code = NotFound desc = could not find container \"d6257d46d81449fc6c2c0de36923ce2059c3996194bf176804d46e247cff9a22\": container with ID starting with d6257d46d81449fc6c2c0de36923ce2059c3996194bf176804d46e247cff9a22 not found: ID does not exist" Dec 01 15:49:48 crc kubenswrapper[4637]: I1201 15:49:48.534077 4637 scope.go:117] "RemoveContainer" containerID="e6e9eca6b95362f7a4e4b15a2b4a0302d90907f51c58d33b7efac2f30646d257" Dec 01 15:49:48 crc kubenswrapper[4637]: E1201 15:49:48.534455 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6e9eca6b95362f7a4e4b15a2b4a0302d90907f51c58d33b7efac2f30646d257\": container with ID starting with e6e9eca6b95362f7a4e4b15a2b4a0302d90907f51c58d33b7efac2f30646d257 not found: ID does not exist" containerID="e6e9eca6b95362f7a4e4b15a2b4a0302d90907f51c58d33b7efac2f30646d257" Dec 01 15:49:48 crc 
kubenswrapper[4637]: I1201 15:49:48.534484 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6e9eca6b95362f7a4e4b15a2b4a0302d90907f51c58d33b7efac2f30646d257"} err="failed to get container status \"e6e9eca6b95362f7a4e4b15a2b4a0302d90907f51c58d33b7efac2f30646d257\": rpc error: code = NotFound desc = could not find container \"e6e9eca6b95362f7a4e4b15a2b4a0302d90907f51c58d33b7efac2f30646d257\": container with ID starting with e6e9eca6b95362f7a4e4b15a2b4a0302d90907f51c58d33b7efac2f30646d257 not found: ID does not exist" Dec 01 15:49:48 crc kubenswrapper[4637]: I1201 15:49:48.534505 4637 scope.go:117] "RemoveContainer" containerID="89b04b62b4060b1c73fbe1920832857ffb099e54a4171dde68f0915decc8a41d" Dec 01 15:49:48 crc kubenswrapper[4637]: E1201 15:49:48.534875 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89b04b62b4060b1c73fbe1920832857ffb099e54a4171dde68f0915decc8a41d\": container with ID starting with 89b04b62b4060b1c73fbe1920832857ffb099e54a4171dde68f0915decc8a41d not found: ID does not exist" containerID="89b04b62b4060b1c73fbe1920832857ffb099e54a4171dde68f0915decc8a41d" Dec 01 15:49:48 crc kubenswrapper[4637]: I1201 15:49:48.534893 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89b04b62b4060b1c73fbe1920832857ffb099e54a4171dde68f0915decc8a41d"} err="failed to get container status \"89b04b62b4060b1c73fbe1920832857ffb099e54a4171dde68f0915decc8a41d\": rpc error: code = NotFound desc = could not find container \"89b04b62b4060b1c73fbe1920832857ffb099e54a4171dde68f0915decc8a41d\": container with ID starting with 89b04b62b4060b1c73fbe1920832857ffb099e54a4171dde68f0915decc8a41d not found: ID does not exist" Dec 01 15:49:48 crc kubenswrapper[4637]: I1201 15:49:48.744895 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vtfrd"] Dec 01 15:49:48 
crc kubenswrapper[4637]: I1201 15:49:48.783046 4637 scope.go:117] "RemoveContainer" containerID="467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8" Dec 01 15:49:48 crc kubenswrapper[4637]: E1201 15:49:48.784208 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:49:49 crc kubenswrapper[4637]: I1201 15:49:49.361652 4637 generic.go:334] "Generic (PLEG): container finished" podID="9dc92848-81ca-42ad-a89d-8235cd2e330d" containerID="fce69bed73c36179139c6e3772f0d08c9dc3361a663442b0e92b46636e8399f9" exitCode=0 Dec 01 15:49:49 crc kubenswrapper[4637]: I1201 15:49:49.361694 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vtfrd" event={"ID":"9dc92848-81ca-42ad-a89d-8235cd2e330d","Type":"ContainerDied","Data":"fce69bed73c36179139c6e3772f0d08c9dc3361a663442b0e92b46636e8399f9"} Dec 01 15:49:49 crc kubenswrapper[4637]: I1201 15:49:49.361715 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vtfrd" event={"ID":"9dc92848-81ca-42ad-a89d-8235cd2e330d","Type":"ContainerStarted","Data":"a31bffc723c0c6065c728ec5b2fde4b23c12e3e247988fc377a75e099a67c502"} Dec 01 15:49:49 crc kubenswrapper[4637]: I1201 15:49:49.794790 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e7252ab-6651-4c9d-afef-050eb7e97fa8" path="/var/lib/kubelet/pods/2e7252ab-6651-4c9d-afef-050eb7e97fa8/volumes" Dec 01 15:49:51 crc kubenswrapper[4637]: I1201 15:49:51.383608 4637 generic.go:334] "Generic (PLEG): container finished" podID="9dc92848-81ca-42ad-a89d-8235cd2e330d" 
containerID="0e881eb10ab04c1f82bc72964bfe292e5dcf276a0ead05e1f44b0ed961529a23" exitCode=0 Dec 01 15:49:51 crc kubenswrapper[4637]: I1201 15:49:51.384147 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vtfrd" event={"ID":"9dc92848-81ca-42ad-a89d-8235cd2e330d","Type":"ContainerDied","Data":"0e881eb10ab04c1f82bc72964bfe292e5dcf276a0ead05e1f44b0ed961529a23"} Dec 01 15:49:52 crc kubenswrapper[4637]: I1201 15:49:52.417449 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vtfrd" event={"ID":"9dc92848-81ca-42ad-a89d-8235cd2e330d","Type":"ContainerStarted","Data":"e45866b77383f996b939a1d135962753321393d8b5436aa9fbf4acccedbb3063"} Dec 01 15:49:52 crc kubenswrapper[4637]: I1201 15:49:52.440047 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vtfrd" podStartSLOduration=2.766000418 podStartE2EDuration="5.440028788s" podCreationTimestamp="2025-12-01 15:49:47 +0000 UTC" firstStartedPulling="2025-12-01 15:49:49.36361168 +0000 UTC m=+3839.881320498" lastFinishedPulling="2025-12-01 15:49:52.03764004 +0000 UTC m=+3842.555348868" observedRunningTime="2025-12-01 15:49:52.437315764 +0000 UTC m=+3842.955024592" watchObservedRunningTime="2025-12-01 15:49:52.440028788 +0000 UTC m=+3842.957737616" Dec 01 15:49:58 crc kubenswrapper[4637]: I1201 15:49:58.246278 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vtfrd" Dec 01 15:49:58 crc kubenswrapper[4637]: I1201 15:49:58.249712 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vtfrd" Dec 01 15:49:58 crc kubenswrapper[4637]: I1201 15:49:58.316819 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vtfrd" Dec 01 15:49:58 crc kubenswrapper[4637]: I1201 15:49:58.530909 4637 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vtfrd" Dec 01 15:49:58 crc kubenswrapper[4637]: I1201 15:49:58.602560 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vtfrd"] Dec 01 15:50:00 crc kubenswrapper[4637]: I1201 15:50:00.504845 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vtfrd" podUID="9dc92848-81ca-42ad-a89d-8235cd2e330d" containerName="registry-server" containerID="cri-o://e45866b77383f996b939a1d135962753321393d8b5436aa9fbf4acccedbb3063" gracePeriod=2 Dec 01 15:50:01 crc kubenswrapper[4637]: I1201 15:50:01.102515 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vtfrd" Dec 01 15:50:01 crc kubenswrapper[4637]: I1201 15:50:01.184070 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dc92848-81ca-42ad-a89d-8235cd2e330d-catalog-content\") pod \"9dc92848-81ca-42ad-a89d-8235cd2e330d\" (UID: \"9dc92848-81ca-42ad-a89d-8235cd2e330d\") " Dec 01 15:50:01 crc kubenswrapper[4637]: I1201 15:50:01.184262 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m29k7\" (UniqueName: \"kubernetes.io/projected/9dc92848-81ca-42ad-a89d-8235cd2e330d-kube-api-access-m29k7\") pod \"9dc92848-81ca-42ad-a89d-8235cd2e330d\" (UID: \"9dc92848-81ca-42ad-a89d-8235cd2e330d\") " Dec 01 15:50:01 crc kubenswrapper[4637]: I1201 15:50:01.185423 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dc92848-81ca-42ad-a89d-8235cd2e330d-utilities\") pod \"9dc92848-81ca-42ad-a89d-8235cd2e330d\" (UID: \"9dc92848-81ca-42ad-a89d-8235cd2e330d\") " Dec 01 15:50:01 crc kubenswrapper[4637]: I1201 15:50:01.187821 4637 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dc92848-81ca-42ad-a89d-8235cd2e330d-utilities" (OuterVolumeSpecName: "utilities") pod "9dc92848-81ca-42ad-a89d-8235cd2e330d" (UID: "9dc92848-81ca-42ad-a89d-8235cd2e330d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:50:01 crc kubenswrapper[4637]: I1201 15:50:01.202517 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dc92848-81ca-42ad-a89d-8235cd2e330d-kube-api-access-m29k7" (OuterVolumeSpecName: "kube-api-access-m29k7") pod "9dc92848-81ca-42ad-a89d-8235cd2e330d" (UID: "9dc92848-81ca-42ad-a89d-8235cd2e330d"). InnerVolumeSpecName "kube-api-access-m29k7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:50:01 crc kubenswrapper[4637]: I1201 15:50:01.203885 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dc92848-81ca-42ad-a89d-8235cd2e330d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9dc92848-81ca-42ad-a89d-8235cd2e330d" (UID: "9dc92848-81ca-42ad-a89d-8235cd2e330d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:50:01 crc kubenswrapper[4637]: I1201 15:50:01.288251 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dc92848-81ca-42ad-a89d-8235cd2e330d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 15:50:01 crc kubenswrapper[4637]: I1201 15:50:01.288544 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m29k7\" (UniqueName: \"kubernetes.io/projected/9dc92848-81ca-42ad-a89d-8235cd2e330d-kube-api-access-m29k7\") on node \"crc\" DevicePath \"\"" Dec 01 15:50:01 crc kubenswrapper[4637]: I1201 15:50:01.288612 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dc92848-81ca-42ad-a89d-8235cd2e330d-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 15:50:01 crc kubenswrapper[4637]: I1201 15:50:01.518332 4637 generic.go:334] "Generic (PLEG): container finished" podID="9dc92848-81ca-42ad-a89d-8235cd2e330d" containerID="e45866b77383f996b939a1d135962753321393d8b5436aa9fbf4acccedbb3063" exitCode=0 Dec 01 15:50:01 crc kubenswrapper[4637]: I1201 15:50:01.518426 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vtfrd" event={"ID":"9dc92848-81ca-42ad-a89d-8235cd2e330d","Type":"ContainerDied","Data":"e45866b77383f996b939a1d135962753321393d8b5436aa9fbf4acccedbb3063"} Dec 01 15:50:01 crc kubenswrapper[4637]: I1201 15:50:01.518514 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vtfrd" event={"ID":"9dc92848-81ca-42ad-a89d-8235cd2e330d","Type":"ContainerDied","Data":"a31bffc723c0c6065c728ec5b2fde4b23c12e3e247988fc377a75e099a67c502"} Dec 01 15:50:01 crc kubenswrapper[4637]: I1201 15:50:01.518543 4637 scope.go:117] "RemoveContainer" containerID="e45866b77383f996b939a1d135962753321393d8b5436aa9fbf4acccedbb3063" Dec 01 15:50:01 crc kubenswrapper[4637]: I1201 
15:50:01.524572 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vtfrd" Dec 01 15:50:01 crc kubenswrapper[4637]: I1201 15:50:01.586396 4637 scope.go:117] "RemoveContainer" containerID="0e881eb10ab04c1f82bc72964bfe292e5dcf276a0ead05e1f44b0ed961529a23" Dec 01 15:50:01 crc kubenswrapper[4637]: I1201 15:50:01.615303 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vtfrd"] Dec 01 15:50:01 crc kubenswrapper[4637]: I1201 15:50:01.629501 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vtfrd"] Dec 01 15:50:01 crc kubenswrapper[4637]: I1201 15:50:01.639665 4637 scope.go:117] "RemoveContainer" containerID="fce69bed73c36179139c6e3772f0d08c9dc3361a663442b0e92b46636e8399f9" Dec 01 15:50:01 crc kubenswrapper[4637]: I1201 15:50:01.666177 4637 scope.go:117] "RemoveContainer" containerID="e45866b77383f996b939a1d135962753321393d8b5436aa9fbf4acccedbb3063" Dec 01 15:50:01 crc kubenswrapper[4637]: E1201 15:50:01.666842 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e45866b77383f996b939a1d135962753321393d8b5436aa9fbf4acccedbb3063\": container with ID starting with e45866b77383f996b939a1d135962753321393d8b5436aa9fbf4acccedbb3063 not found: ID does not exist" containerID="e45866b77383f996b939a1d135962753321393d8b5436aa9fbf4acccedbb3063" Dec 01 15:50:01 crc kubenswrapper[4637]: I1201 15:50:01.666880 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e45866b77383f996b939a1d135962753321393d8b5436aa9fbf4acccedbb3063"} err="failed to get container status \"e45866b77383f996b939a1d135962753321393d8b5436aa9fbf4acccedbb3063\": rpc error: code = NotFound desc = could not find container \"e45866b77383f996b939a1d135962753321393d8b5436aa9fbf4acccedbb3063\": container with ID starting with 
e45866b77383f996b939a1d135962753321393d8b5436aa9fbf4acccedbb3063 not found: ID does not exist" Dec 01 15:50:01 crc kubenswrapper[4637]: I1201 15:50:01.666906 4637 scope.go:117] "RemoveContainer" containerID="0e881eb10ab04c1f82bc72964bfe292e5dcf276a0ead05e1f44b0ed961529a23" Dec 01 15:50:01 crc kubenswrapper[4637]: E1201 15:50:01.667370 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e881eb10ab04c1f82bc72964bfe292e5dcf276a0ead05e1f44b0ed961529a23\": container with ID starting with 0e881eb10ab04c1f82bc72964bfe292e5dcf276a0ead05e1f44b0ed961529a23 not found: ID does not exist" containerID="0e881eb10ab04c1f82bc72964bfe292e5dcf276a0ead05e1f44b0ed961529a23" Dec 01 15:50:01 crc kubenswrapper[4637]: I1201 15:50:01.667393 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e881eb10ab04c1f82bc72964bfe292e5dcf276a0ead05e1f44b0ed961529a23"} err="failed to get container status \"0e881eb10ab04c1f82bc72964bfe292e5dcf276a0ead05e1f44b0ed961529a23\": rpc error: code = NotFound desc = could not find container \"0e881eb10ab04c1f82bc72964bfe292e5dcf276a0ead05e1f44b0ed961529a23\": container with ID starting with 0e881eb10ab04c1f82bc72964bfe292e5dcf276a0ead05e1f44b0ed961529a23 not found: ID does not exist" Dec 01 15:50:01 crc kubenswrapper[4637]: I1201 15:50:01.667410 4637 scope.go:117] "RemoveContainer" containerID="fce69bed73c36179139c6e3772f0d08c9dc3361a663442b0e92b46636e8399f9" Dec 01 15:50:01 crc kubenswrapper[4637]: E1201 15:50:01.667795 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fce69bed73c36179139c6e3772f0d08c9dc3361a663442b0e92b46636e8399f9\": container with ID starting with fce69bed73c36179139c6e3772f0d08c9dc3361a663442b0e92b46636e8399f9 not found: ID does not exist" containerID="fce69bed73c36179139c6e3772f0d08c9dc3361a663442b0e92b46636e8399f9" Dec 01 15:50:01 crc 
kubenswrapper[4637]: I1201 15:50:01.667814 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fce69bed73c36179139c6e3772f0d08c9dc3361a663442b0e92b46636e8399f9"} err="failed to get container status \"fce69bed73c36179139c6e3772f0d08c9dc3361a663442b0e92b46636e8399f9\": rpc error: code = NotFound desc = could not find container \"fce69bed73c36179139c6e3772f0d08c9dc3361a663442b0e92b46636e8399f9\": container with ID starting with fce69bed73c36179139c6e3772f0d08c9dc3361a663442b0e92b46636e8399f9 not found: ID does not exist" Dec 01 15:50:01 crc kubenswrapper[4637]: I1201 15:50:01.782508 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dc92848-81ca-42ad-a89d-8235cd2e330d" path="/var/lib/kubelet/pods/9dc92848-81ca-42ad-a89d-8235cd2e330d/volumes" Dec 01 15:50:02 crc kubenswrapper[4637]: I1201 15:50:02.771450 4637 scope.go:117] "RemoveContainer" containerID="467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8" Dec 01 15:50:02 crc kubenswrapper[4637]: E1201 15:50:02.771989 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:50:14 crc kubenswrapper[4637]: I1201 15:50:14.773404 4637 scope.go:117] "RemoveContainer" containerID="467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8" Dec 01 15:50:14 crc kubenswrapper[4637]: E1201 15:50:14.774078 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:50:29 crc kubenswrapper[4637]: I1201 15:50:29.779309 4637 scope.go:117] "RemoveContainer" containerID="467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8" Dec 01 15:50:29 crc kubenswrapper[4637]: E1201 15:50:29.780207 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:50:44 crc kubenswrapper[4637]: I1201 15:50:44.773018 4637 scope.go:117] "RemoveContainer" containerID="467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8" Dec 01 15:50:44 crc kubenswrapper[4637]: E1201 15:50:44.774026 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:50:55 crc kubenswrapper[4637]: I1201 15:50:55.771996 4637 scope.go:117] "RemoveContainer" containerID="467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8" Dec 01 15:50:55 crc kubenswrapper[4637]: E1201 15:50:55.772980 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:51:09 crc kubenswrapper[4637]: I1201 15:51:09.781221 4637 scope.go:117] "RemoveContainer" containerID="467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8" Dec 01 15:51:09 crc kubenswrapper[4637]: E1201 15:51:09.781877 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:51:23 crc kubenswrapper[4637]: I1201 15:51:23.771719 4637 scope.go:117] "RemoveContainer" containerID="467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8" Dec 01 15:51:24 crc kubenswrapper[4637]: I1201 15:51:24.274248 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerStarted","Data":"a059050bdec537d7c8b69c1af325a13da15dcaef53005a3b465e81ccb049dfeb"} Dec 01 15:52:16 crc kubenswrapper[4637]: I1201 15:52:16.447829 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sxxfd"] Dec 01 15:52:16 crc kubenswrapper[4637]: E1201 15:52:16.449037 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e7252ab-6651-4c9d-afef-050eb7e97fa8" containerName="extract-content" Dec 01 15:52:16 crc kubenswrapper[4637]: I1201 15:52:16.449055 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e7252ab-6651-4c9d-afef-050eb7e97fa8" 
containerName="extract-content" Dec 01 15:52:16 crc kubenswrapper[4637]: E1201 15:52:16.449073 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e7252ab-6651-4c9d-afef-050eb7e97fa8" containerName="registry-server" Dec 01 15:52:16 crc kubenswrapper[4637]: I1201 15:52:16.449080 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e7252ab-6651-4c9d-afef-050eb7e97fa8" containerName="registry-server" Dec 01 15:52:16 crc kubenswrapper[4637]: E1201 15:52:16.449103 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e7252ab-6651-4c9d-afef-050eb7e97fa8" containerName="extract-utilities" Dec 01 15:52:16 crc kubenswrapper[4637]: I1201 15:52:16.449113 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e7252ab-6651-4c9d-afef-050eb7e97fa8" containerName="extract-utilities" Dec 01 15:52:16 crc kubenswrapper[4637]: E1201 15:52:16.449131 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dc92848-81ca-42ad-a89d-8235cd2e330d" containerName="extract-content" Dec 01 15:52:16 crc kubenswrapper[4637]: I1201 15:52:16.449138 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dc92848-81ca-42ad-a89d-8235cd2e330d" containerName="extract-content" Dec 01 15:52:16 crc kubenswrapper[4637]: E1201 15:52:16.449161 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dc92848-81ca-42ad-a89d-8235cd2e330d" containerName="registry-server" Dec 01 15:52:16 crc kubenswrapper[4637]: I1201 15:52:16.449168 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dc92848-81ca-42ad-a89d-8235cd2e330d" containerName="registry-server" Dec 01 15:52:16 crc kubenswrapper[4637]: E1201 15:52:16.449179 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dc92848-81ca-42ad-a89d-8235cd2e330d" containerName="extract-utilities" Dec 01 15:52:16 crc kubenswrapper[4637]: I1201 15:52:16.449186 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dc92848-81ca-42ad-a89d-8235cd2e330d" 
containerName="extract-utilities" Dec 01 15:52:16 crc kubenswrapper[4637]: I1201 15:52:16.449446 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dc92848-81ca-42ad-a89d-8235cd2e330d" containerName="registry-server" Dec 01 15:52:16 crc kubenswrapper[4637]: I1201 15:52:16.449462 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e7252ab-6651-4c9d-afef-050eb7e97fa8" containerName="registry-server" Dec 01 15:52:16 crc kubenswrapper[4637]: I1201 15:52:16.451246 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sxxfd" Dec 01 15:52:16 crc kubenswrapper[4637]: I1201 15:52:16.462577 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sxxfd"] Dec 01 15:52:16 crc kubenswrapper[4637]: I1201 15:52:16.603297 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb5ee75-beee-4755-8c3c-0fd26b096108-catalog-content\") pod \"community-operators-sxxfd\" (UID: \"bcb5ee75-beee-4755-8c3c-0fd26b096108\") " pod="openshift-marketplace/community-operators-sxxfd" Dec 01 15:52:16 crc kubenswrapper[4637]: I1201 15:52:16.603954 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-944wv\" (UniqueName: \"kubernetes.io/projected/bcb5ee75-beee-4755-8c3c-0fd26b096108-kube-api-access-944wv\") pod \"community-operators-sxxfd\" (UID: \"bcb5ee75-beee-4755-8c3c-0fd26b096108\") " pod="openshift-marketplace/community-operators-sxxfd" Dec 01 15:52:16 crc kubenswrapper[4637]: I1201 15:52:16.604044 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb5ee75-beee-4755-8c3c-0fd26b096108-utilities\") pod \"community-operators-sxxfd\" (UID: \"bcb5ee75-beee-4755-8c3c-0fd26b096108\") " 
pod="openshift-marketplace/community-operators-sxxfd" Dec 01 15:52:16 crc kubenswrapper[4637]: I1201 15:52:16.706453 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-944wv\" (UniqueName: \"kubernetes.io/projected/bcb5ee75-beee-4755-8c3c-0fd26b096108-kube-api-access-944wv\") pod \"community-operators-sxxfd\" (UID: \"bcb5ee75-beee-4755-8c3c-0fd26b096108\") " pod="openshift-marketplace/community-operators-sxxfd" Dec 01 15:52:16 crc kubenswrapper[4637]: I1201 15:52:16.706503 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb5ee75-beee-4755-8c3c-0fd26b096108-utilities\") pod \"community-operators-sxxfd\" (UID: \"bcb5ee75-beee-4755-8c3c-0fd26b096108\") " pod="openshift-marketplace/community-operators-sxxfd" Dec 01 15:52:16 crc kubenswrapper[4637]: I1201 15:52:16.706579 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb5ee75-beee-4755-8c3c-0fd26b096108-catalog-content\") pod \"community-operators-sxxfd\" (UID: \"bcb5ee75-beee-4755-8c3c-0fd26b096108\") " pod="openshift-marketplace/community-operators-sxxfd" Dec 01 15:52:16 crc kubenswrapper[4637]: I1201 15:52:16.707191 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb5ee75-beee-4755-8c3c-0fd26b096108-catalog-content\") pod \"community-operators-sxxfd\" (UID: \"bcb5ee75-beee-4755-8c3c-0fd26b096108\") " pod="openshift-marketplace/community-operators-sxxfd" Dec 01 15:52:16 crc kubenswrapper[4637]: I1201 15:52:16.707386 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb5ee75-beee-4755-8c3c-0fd26b096108-utilities\") pod \"community-operators-sxxfd\" (UID: \"bcb5ee75-beee-4755-8c3c-0fd26b096108\") " 
pod="openshift-marketplace/community-operators-sxxfd" Dec 01 15:52:16 crc kubenswrapper[4637]: I1201 15:52:16.728256 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-944wv\" (UniqueName: \"kubernetes.io/projected/bcb5ee75-beee-4755-8c3c-0fd26b096108-kube-api-access-944wv\") pod \"community-operators-sxxfd\" (UID: \"bcb5ee75-beee-4755-8c3c-0fd26b096108\") " pod="openshift-marketplace/community-operators-sxxfd" Dec 01 15:52:16 crc kubenswrapper[4637]: I1201 15:52:16.778989 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sxxfd" Dec 01 15:52:17 crc kubenswrapper[4637]: I1201 15:52:17.576436 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sxxfd"] Dec 01 15:52:17 crc kubenswrapper[4637]: I1201 15:52:17.787866 4637 generic.go:334] "Generic (PLEG): container finished" podID="bcb5ee75-beee-4755-8c3c-0fd26b096108" containerID="9c99b702e585db7262188868c7de81ac28efbeee893de2ee047b346b5dfd57c4" exitCode=0 Dec 01 15:52:17 crc kubenswrapper[4637]: I1201 15:52:17.788154 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxxfd" event={"ID":"bcb5ee75-beee-4755-8c3c-0fd26b096108","Type":"ContainerDied","Data":"9c99b702e585db7262188868c7de81ac28efbeee893de2ee047b346b5dfd57c4"} Dec 01 15:52:17 crc kubenswrapper[4637]: I1201 15:52:17.788307 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxxfd" event={"ID":"bcb5ee75-beee-4755-8c3c-0fd26b096108","Type":"ContainerStarted","Data":"269d75e0832676e5b45eb9bf093b358d70488c3dbc4f7e94f46272a03276cf06"} Dec 01 15:52:17 crc kubenswrapper[4637]: I1201 15:52:17.791282 4637 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 15:52:18 crc kubenswrapper[4637]: I1201 15:52:18.803116 4637 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/community-operators-sxxfd" event={"ID":"bcb5ee75-beee-4755-8c3c-0fd26b096108","Type":"ContainerStarted","Data":"70904778e1f795482a410740de0d88e74f33e58cc960a380b592118c66fbca25"} Dec 01 15:52:19 crc kubenswrapper[4637]: I1201 15:52:19.821854 4637 generic.go:334] "Generic (PLEG): container finished" podID="bcb5ee75-beee-4755-8c3c-0fd26b096108" containerID="70904778e1f795482a410740de0d88e74f33e58cc960a380b592118c66fbca25" exitCode=0 Dec 01 15:52:19 crc kubenswrapper[4637]: I1201 15:52:19.822050 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxxfd" event={"ID":"bcb5ee75-beee-4755-8c3c-0fd26b096108","Type":"ContainerDied","Data":"70904778e1f795482a410740de0d88e74f33e58cc960a380b592118c66fbca25"} Dec 01 15:52:21 crc kubenswrapper[4637]: I1201 15:52:21.858948 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxxfd" event={"ID":"bcb5ee75-beee-4755-8c3c-0fd26b096108","Type":"ContainerStarted","Data":"f183183dc2d3386f45d4251341d45a9d31d76aa0cddb12bf06512cdf9c7404ac"} Dec 01 15:52:21 crc kubenswrapper[4637]: I1201 15:52:21.898109 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sxxfd" podStartSLOduration=3.037780549 podStartE2EDuration="5.898084288s" podCreationTimestamp="2025-12-01 15:52:16 +0000 UTC" firstStartedPulling="2025-12-01 15:52:17.789826768 +0000 UTC m=+3988.307535596" lastFinishedPulling="2025-12-01 15:52:20.650130507 +0000 UTC m=+3991.167839335" observedRunningTime="2025-12-01 15:52:21.880304927 +0000 UTC m=+3992.398013755" watchObservedRunningTime="2025-12-01 15:52:21.898084288 +0000 UTC m=+3992.415793116" Dec 01 15:52:26 crc kubenswrapper[4637]: I1201 15:52:26.779911 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sxxfd" Dec 01 15:52:26 crc kubenswrapper[4637]: I1201 15:52:26.780551 
4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sxxfd" Dec 01 15:52:26 crc kubenswrapper[4637]: I1201 15:52:26.842761 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sxxfd" Dec 01 15:52:26 crc kubenswrapper[4637]: I1201 15:52:26.960117 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sxxfd" Dec 01 15:52:27 crc kubenswrapper[4637]: I1201 15:52:27.087580 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sxxfd"] Dec 01 15:52:28 crc kubenswrapper[4637]: I1201 15:52:28.920293 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sxxfd" podUID="bcb5ee75-beee-4755-8c3c-0fd26b096108" containerName="registry-server" containerID="cri-o://f183183dc2d3386f45d4251341d45a9d31d76aa0cddb12bf06512cdf9c7404ac" gracePeriod=2 Dec 01 15:52:29 crc kubenswrapper[4637]: I1201 15:52:29.559252 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sxxfd" Dec 01 15:52:29 crc kubenswrapper[4637]: I1201 15:52:29.659115 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-944wv\" (UniqueName: \"kubernetes.io/projected/bcb5ee75-beee-4755-8c3c-0fd26b096108-kube-api-access-944wv\") pod \"bcb5ee75-beee-4755-8c3c-0fd26b096108\" (UID: \"bcb5ee75-beee-4755-8c3c-0fd26b096108\") " Dec 01 15:52:29 crc kubenswrapper[4637]: I1201 15:52:29.659386 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb5ee75-beee-4755-8c3c-0fd26b096108-utilities\") pod \"bcb5ee75-beee-4755-8c3c-0fd26b096108\" (UID: \"bcb5ee75-beee-4755-8c3c-0fd26b096108\") " Dec 01 15:52:29 crc kubenswrapper[4637]: I1201 15:52:29.659512 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb5ee75-beee-4755-8c3c-0fd26b096108-catalog-content\") pod \"bcb5ee75-beee-4755-8c3c-0fd26b096108\" (UID: \"bcb5ee75-beee-4755-8c3c-0fd26b096108\") " Dec 01 15:52:29 crc kubenswrapper[4637]: I1201 15:52:29.660492 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcb5ee75-beee-4755-8c3c-0fd26b096108-utilities" (OuterVolumeSpecName: "utilities") pod "bcb5ee75-beee-4755-8c3c-0fd26b096108" (UID: "bcb5ee75-beee-4755-8c3c-0fd26b096108"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:52:29 crc kubenswrapper[4637]: I1201 15:52:29.690727 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcb5ee75-beee-4755-8c3c-0fd26b096108-kube-api-access-944wv" (OuterVolumeSpecName: "kube-api-access-944wv") pod "bcb5ee75-beee-4755-8c3c-0fd26b096108" (UID: "bcb5ee75-beee-4755-8c3c-0fd26b096108"). InnerVolumeSpecName "kube-api-access-944wv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:52:29 crc kubenswrapper[4637]: I1201 15:52:29.717751 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcb5ee75-beee-4755-8c3c-0fd26b096108-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bcb5ee75-beee-4755-8c3c-0fd26b096108" (UID: "bcb5ee75-beee-4755-8c3c-0fd26b096108"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:52:29 crc kubenswrapper[4637]: I1201 15:52:29.761687 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-944wv\" (UniqueName: \"kubernetes.io/projected/bcb5ee75-beee-4755-8c3c-0fd26b096108-kube-api-access-944wv\") on node \"crc\" DevicePath \"\"" Dec 01 15:52:29 crc kubenswrapper[4637]: I1201 15:52:29.761750 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb5ee75-beee-4755-8c3c-0fd26b096108-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 15:52:29 crc kubenswrapper[4637]: I1201 15:52:29.761766 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb5ee75-beee-4755-8c3c-0fd26b096108-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 15:52:29 crc kubenswrapper[4637]: I1201 15:52:29.931870 4637 generic.go:334] "Generic (PLEG): container finished" podID="bcb5ee75-beee-4755-8c3c-0fd26b096108" containerID="f183183dc2d3386f45d4251341d45a9d31d76aa0cddb12bf06512cdf9c7404ac" exitCode=0 Dec 01 15:52:29 crc kubenswrapper[4637]: I1201 15:52:29.931972 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sxxfd" Dec 01 15:52:29 crc kubenswrapper[4637]: I1201 15:52:29.931992 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxxfd" event={"ID":"bcb5ee75-beee-4755-8c3c-0fd26b096108","Type":"ContainerDied","Data":"f183183dc2d3386f45d4251341d45a9d31d76aa0cddb12bf06512cdf9c7404ac"} Dec 01 15:52:29 crc kubenswrapper[4637]: I1201 15:52:29.932890 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxxfd" event={"ID":"bcb5ee75-beee-4755-8c3c-0fd26b096108","Type":"ContainerDied","Data":"269d75e0832676e5b45eb9bf093b358d70488c3dbc4f7e94f46272a03276cf06"} Dec 01 15:52:29 crc kubenswrapper[4637]: I1201 15:52:29.932956 4637 scope.go:117] "RemoveContainer" containerID="f183183dc2d3386f45d4251341d45a9d31d76aa0cddb12bf06512cdf9c7404ac" Dec 01 15:52:29 crc kubenswrapper[4637]: I1201 15:52:29.957904 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sxxfd"] Dec 01 15:52:29 crc kubenswrapper[4637]: I1201 15:52:29.958706 4637 scope.go:117] "RemoveContainer" containerID="70904778e1f795482a410740de0d88e74f33e58cc960a380b592118c66fbca25" Dec 01 15:52:29 crc kubenswrapper[4637]: I1201 15:52:29.968236 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sxxfd"] Dec 01 15:52:29 crc kubenswrapper[4637]: I1201 15:52:29.986949 4637 scope.go:117] "RemoveContainer" containerID="9c99b702e585db7262188868c7de81ac28efbeee893de2ee047b346b5dfd57c4" Dec 01 15:52:30 crc kubenswrapper[4637]: I1201 15:52:30.036175 4637 scope.go:117] "RemoveContainer" containerID="f183183dc2d3386f45d4251341d45a9d31d76aa0cddb12bf06512cdf9c7404ac" Dec 01 15:52:30 crc kubenswrapper[4637]: E1201 15:52:30.036807 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f183183dc2d3386f45d4251341d45a9d31d76aa0cddb12bf06512cdf9c7404ac\": container with ID starting with f183183dc2d3386f45d4251341d45a9d31d76aa0cddb12bf06512cdf9c7404ac not found: ID does not exist" containerID="f183183dc2d3386f45d4251341d45a9d31d76aa0cddb12bf06512cdf9c7404ac" Dec 01 15:52:30 crc kubenswrapper[4637]: I1201 15:52:30.036840 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f183183dc2d3386f45d4251341d45a9d31d76aa0cddb12bf06512cdf9c7404ac"} err="failed to get container status \"f183183dc2d3386f45d4251341d45a9d31d76aa0cddb12bf06512cdf9c7404ac\": rpc error: code = NotFound desc = could not find container \"f183183dc2d3386f45d4251341d45a9d31d76aa0cddb12bf06512cdf9c7404ac\": container with ID starting with f183183dc2d3386f45d4251341d45a9d31d76aa0cddb12bf06512cdf9c7404ac not found: ID does not exist" Dec 01 15:52:30 crc kubenswrapper[4637]: I1201 15:52:30.036859 4637 scope.go:117] "RemoveContainer" containerID="70904778e1f795482a410740de0d88e74f33e58cc960a380b592118c66fbca25" Dec 01 15:52:30 crc kubenswrapper[4637]: E1201 15:52:30.037140 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70904778e1f795482a410740de0d88e74f33e58cc960a380b592118c66fbca25\": container with ID starting with 70904778e1f795482a410740de0d88e74f33e58cc960a380b592118c66fbca25 not found: ID does not exist" containerID="70904778e1f795482a410740de0d88e74f33e58cc960a380b592118c66fbca25" Dec 01 15:52:30 crc kubenswrapper[4637]: I1201 15:52:30.037165 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70904778e1f795482a410740de0d88e74f33e58cc960a380b592118c66fbca25"} err="failed to get container status \"70904778e1f795482a410740de0d88e74f33e58cc960a380b592118c66fbca25\": rpc error: code = NotFound desc = could not find container \"70904778e1f795482a410740de0d88e74f33e58cc960a380b592118c66fbca25\": container with ID 
starting with 70904778e1f795482a410740de0d88e74f33e58cc960a380b592118c66fbca25 not found: ID does not exist" Dec 01 15:52:30 crc kubenswrapper[4637]: I1201 15:52:30.037180 4637 scope.go:117] "RemoveContainer" containerID="9c99b702e585db7262188868c7de81ac28efbeee893de2ee047b346b5dfd57c4" Dec 01 15:52:30 crc kubenswrapper[4637]: E1201 15:52:30.037348 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c99b702e585db7262188868c7de81ac28efbeee893de2ee047b346b5dfd57c4\": container with ID starting with 9c99b702e585db7262188868c7de81ac28efbeee893de2ee047b346b5dfd57c4 not found: ID does not exist" containerID="9c99b702e585db7262188868c7de81ac28efbeee893de2ee047b346b5dfd57c4" Dec 01 15:52:30 crc kubenswrapper[4637]: I1201 15:52:30.037618 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c99b702e585db7262188868c7de81ac28efbeee893de2ee047b346b5dfd57c4"} err="failed to get container status \"9c99b702e585db7262188868c7de81ac28efbeee893de2ee047b346b5dfd57c4\": rpc error: code = NotFound desc = could not find container \"9c99b702e585db7262188868c7de81ac28efbeee893de2ee047b346b5dfd57c4\": container with ID starting with 9c99b702e585db7262188868c7de81ac28efbeee893de2ee047b346b5dfd57c4 not found: ID does not exist" Dec 01 15:52:31 crc kubenswrapper[4637]: I1201 15:52:31.783817 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcb5ee75-beee-4755-8c3c-0fd26b096108" path="/var/lib/kubelet/pods/bcb5ee75-beee-4755-8c3c-0fd26b096108/volumes" Dec 01 15:53:45 crc kubenswrapper[4637]: I1201 15:53:45.614081 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:53:45 crc kubenswrapper[4637]: I1201 
15:53:45.614754 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:54:15 crc kubenswrapper[4637]: I1201 15:54:15.613756 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:54:15 crc kubenswrapper[4637]: I1201 15:54:15.615620 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:54:45 crc kubenswrapper[4637]: I1201 15:54:45.614513 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:54:45 crc kubenswrapper[4637]: I1201 15:54:45.615495 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:54:45 crc kubenswrapper[4637]: I1201 15:54:45.615575 4637 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 15:54:45 crc kubenswrapper[4637]: I1201 15:54:45.616880 4637 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a059050bdec537d7c8b69c1af325a13da15dcaef53005a3b465e81ccb049dfeb"} pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 15:54:45 crc kubenswrapper[4637]: I1201 15:54:45.616971 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" containerID="cri-o://a059050bdec537d7c8b69c1af325a13da15dcaef53005a3b465e81ccb049dfeb" gracePeriod=600 Dec 01 15:54:46 crc kubenswrapper[4637]: I1201 15:54:46.224084 4637 generic.go:334] "Generic (PLEG): container finished" podID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerID="a059050bdec537d7c8b69c1af325a13da15dcaef53005a3b465e81ccb049dfeb" exitCode=0 Dec 01 15:54:46 crc kubenswrapper[4637]: I1201 15:54:46.224444 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerDied","Data":"a059050bdec537d7c8b69c1af325a13da15dcaef53005a3b465e81ccb049dfeb"} Dec 01 15:54:46 crc kubenswrapper[4637]: I1201 15:54:46.224476 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerStarted","Data":"73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c"} Dec 01 15:54:46 crc kubenswrapper[4637]: I1201 15:54:46.224498 4637 scope.go:117] "RemoveContainer" 
containerID="467f5d229dbd6bb3deead9ca069415ba23d31913a13628ece0e87c28c929b1b8" Dec 01 15:55:27 crc kubenswrapper[4637]: I1201 15:55:27.616138 4637 generic.go:334] "Generic (PLEG): container finished" podID="151da5f8-6a6e-4d06-b6a5-de2982ed8da5" containerID="abf8047e5f00122a2f25f305c92c84829e664a906d9ce0905f3bab4e13b783cf" exitCode=0 Dec 01 15:55:27 crc kubenswrapper[4637]: I1201 15:55:27.616202 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"151da5f8-6a6e-4d06-b6a5-de2982ed8da5","Type":"ContainerDied","Data":"abf8047e5f00122a2f25f305c92c84829e664a906d9ce0905f3bab4e13b783cf"} Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.493439 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.618103 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-openstack-config-secret\") pod \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.618530 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.618628 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-test-operator-ephemeral-workdir\") pod \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.618703 4637 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-ca-certs\") pod \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.618743 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-ssh-key\") pod \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.618773 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-config-data\") pod \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.618814 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzl79\" (UniqueName: \"kubernetes.io/projected/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-kube-api-access-gzl79\") pod \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.618843 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-openstack-config\") pod \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.618871 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-test-operator-ephemeral-temporary\") pod 
\"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\" (UID: \"151da5f8-6a6e-4d06-b6a5-de2982ed8da5\") " Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.620201 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "151da5f8-6a6e-4d06-b6a5-de2982ed8da5" (UID: "151da5f8-6a6e-4d06-b6a5-de2982ed8da5"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.620316 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-config-data" (OuterVolumeSpecName: "config-data") pod "151da5f8-6a6e-4d06-b6a5-de2982ed8da5" (UID: "151da5f8-6a6e-4d06-b6a5-de2982ed8da5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.626871 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "test-operator-logs") pod "151da5f8-6a6e-4d06-b6a5-de2982ed8da5" (UID: "151da5f8-6a6e-4d06-b6a5-de2982ed8da5"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.628238 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-kube-api-access-gzl79" (OuterVolumeSpecName: "kube-api-access-gzl79") pod "151da5f8-6a6e-4d06-b6a5-de2982ed8da5" (UID: "151da5f8-6a6e-4d06-b6a5-de2982ed8da5"). InnerVolumeSpecName "kube-api-access-gzl79". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.632020 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "151da5f8-6a6e-4d06-b6a5-de2982ed8da5" (UID: "151da5f8-6a6e-4d06-b6a5-de2982ed8da5"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.638647 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"151da5f8-6a6e-4d06-b6a5-de2982ed8da5","Type":"ContainerDied","Data":"4bb86a95f87fb38398d89480ea85ef3afb4fc35b3442cc54a08d3557a40520b0"} Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.638873 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bb86a95f87fb38398d89480ea85ef3afb4fc35b3442cc54a08d3557a40520b0" Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.639032 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.657222 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "151da5f8-6a6e-4d06-b6a5-de2982ed8da5" (UID: "151da5f8-6a6e-4d06-b6a5-de2982ed8da5"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.666913 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "151da5f8-6a6e-4d06-b6a5-de2982ed8da5" (UID: "151da5f8-6a6e-4d06-b6a5-de2982ed8da5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.672879 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "151da5f8-6a6e-4d06-b6a5-de2982ed8da5" (UID: "151da5f8-6a6e-4d06-b6a5-de2982ed8da5"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.697082 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "151da5f8-6a6e-4d06-b6a5-de2982ed8da5" (UID: "151da5f8-6a6e-4d06-b6a5-de2982ed8da5"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.721702 4637 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.721750 4637 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-ca-certs\") on node \"crc\" DevicePath \"\"" Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.721761 4637 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.721775 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.721785 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzl79\" (UniqueName: \"kubernetes.io/projected/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-kube-api-access-gzl79\") on node \"crc\" DevicePath \"\"" Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.721796 4637 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.721807 4637 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Dec 01 15:55:29 
crc kubenswrapper[4637]: I1201 15:55:29.721817 4637 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/151da5f8-6a6e-4d06-b6a5-de2982ed8da5-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.722180 4637 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.745265 4637 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Dec 01 15:55:29 crc kubenswrapper[4637]: I1201 15:55:29.824086 4637 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Dec 01 15:55:36 crc kubenswrapper[4637]: I1201 15:55:36.916815 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k9spm"] Dec 01 15:55:36 crc kubenswrapper[4637]: E1201 15:55:36.918076 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="151da5f8-6a6e-4d06-b6a5-de2982ed8da5" containerName="tempest-tests-tempest-tests-runner" Dec 01 15:55:36 crc kubenswrapper[4637]: I1201 15:55:36.918095 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="151da5f8-6a6e-4d06-b6a5-de2982ed8da5" containerName="tempest-tests-tempest-tests-runner" Dec 01 15:55:36 crc kubenswrapper[4637]: E1201 15:55:36.918143 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb5ee75-beee-4755-8c3c-0fd26b096108" containerName="extract-utilities" Dec 01 15:55:36 crc kubenswrapper[4637]: I1201 15:55:36.918151 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb5ee75-beee-4755-8c3c-0fd26b096108" containerName="extract-utilities" 
Dec 01 15:55:36 crc kubenswrapper[4637]: E1201 15:55:36.918173 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb5ee75-beee-4755-8c3c-0fd26b096108" containerName="extract-content" Dec 01 15:55:36 crc kubenswrapper[4637]: I1201 15:55:36.918181 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb5ee75-beee-4755-8c3c-0fd26b096108" containerName="extract-content" Dec 01 15:55:36 crc kubenswrapper[4637]: E1201 15:55:36.918199 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb5ee75-beee-4755-8c3c-0fd26b096108" containerName="registry-server" Dec 01 15:55:36 crc kubenswrapper[4637]: I1201 15:55:36.918207 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb5ee75-beee-4755-8c3c-0fd26b096108" containerName="registry-server" Dec 01 15:55:36 crc kubenswrapper[4637]: I1201 15:55:36.918466 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="151da5f8-6a6e-4d06-b6a5-de2982ed8da5" containerName="tempest-tests-tempest-tests-runner" Dec 01 15:55:36 crc kubenswrapper[4637]: I1201 15:55:36.918483 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcb5ee75-beee-4755-8c3c-0fd26b096108" containerName="registry-server" Dec 01 15:55:36 crc kubenswrapper[4637]: I1201 15:55:36.920511 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k9spm" Dec 01 15:55:36 crc kubenswrapper[4637]: I1201 15:55:36.945764 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k9spm"] Dec 01 15:55:37 crc kubenswrapper[4637]: I1201 15:55:37.073476 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23d95adc-2953-47f5-bf61-15b5eb73fe52-catalog-content\") pod \"redhat-operators-k9spm\" (UID: \"23d95adc-2953-47f5-bf61-15b5eb73fe52\") " pod="openshift-marketplace/redhat-operators-k9spm" Dec 01 15:55:37 crc kubenswrapper[4637]: I1201 15:55:37.073859 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23d95adc-2953-47f5-bf61-15b5eb73fe52-utilities\") pod \"redhat-operators-k9spm\" (UID: \"23d95adc-2953-47f5-bf61-15b5eb73fe52\") " pod="openshift-marketplace/redhat-operators-k9spm" Dec 01 15:55:37 crc kubenswrapper[4637]: I1201 15:55:37.073981 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phf4q\" (UniqueName: \"kubernetes.io/projected/23d95adc-2953-47f5-bf61-15b5eb73fe52-kube-api-access-phf4q\") pod \"redhat-operators-k9spm\" (UID: \"23d95adc-2953-47f5-bf61-15b5eb73fe52\") " pod="openshift-marketplace/redhat-operators-k9spm" Dec 01 15:55:37 crc kubenswrapper[4637]: I1201 15:55:37.176173 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23d95adc-2953-47f5-bf61-15b5eb73fe52-utilities\") pod \"redhat-operators-k9spm\" (UID: \"23d95adc-2953-47f5-bf61-15b5eb73fe52\") " pod="openshift-marketplace/redhat-operators-k9spm" Dec 01 15:55:37 crc kubenswrapper[4637]: I1201 15:55:37.176261 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-phf4q\" (UniqueName: \"kubernetes.io/projected/23d95adc-2953-47f5-bf61-15b5eb73fe52-kube-api-access-phf4q\") pod \"redhat-operators-k9spm\" (UID: \"23d95adc-2953-47f5-bf61-15b5eb73fe52\") " pod="openshift-marketplace/redhat-operators-k9spm" Dec 01 15:55:37 crc kubenswrapper[4637]: I1201 15:55:37.176389 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23d95adc-2953-47f5-bf61-15b5eb73fe52-catalog-content\") pod \"redhat-operators-k9spm\" (UID: \"23d95adc-2953-47f5-bf61-15b5eb73fe52\") " pod="openshift-marketplace/redhat-operators-k9spm" Dec 01 15:55:37 crc kubenswrapper[4637]: I1201 15:55:37.176762 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23d95adc-2953-47f5-bf61-15b5eb73fe52-utilities\") pod \"redhat-operators-k9spm\" (UID: \"23d95adc-2953-47f5-bf61-15b5eb73fe52\") " pod="openshift-marketplace/redhat-operators-k9spm" Dec 01 15:55:37 crc kubenswrapper[4637]: I1201 15:55:37.177305 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23d95adc-2953-47f5-bf61-15b5eb73fe52-catalog-content\") pod \"redhat-operators-k9spm\" (UID: \"23d95adc-2953-47f5-bf61-15b5eb73fe52\") " pod="openshift-marketplace/redhat-operators-k9spm" Dec 01 15:55:37 crc kubenswrapper[4637]: I1201 15:55:37.199986 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phf4q\" (UniqueName: \"kubernetes.io/projected/23d95adc-2953-47f5-bf61-15b5eb73fe52-kube-api-access-phf4q\") pod \"redhat-operators-k9spm\" (UID: \"23d95adc-2953-47f5-bf61-15b5eb73fe52\") " pod="openshift-marketplace/redhat-operators-k9spm" Dec 01 15:55:37 crc kubenswrapper[4637]: I1201 15:55:37.244332 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k9spm" Dec 01 15:55:37 crc kubenswrapper[4637]: I1201 15:55:37.764397 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k9spm"] Dec 01 15:55:38 crc kubenswrapper[4637]: I1201 15:55:38.732657 4637 generic.go:334] "Generic (PLEG): container finished" podID="23d95adc-2953-47f5-bf61-15b5eb73fe52" containerID="526aa2b7611fd87ae19098055280b07b32075faa1d70c9d3d6242551d37762ae" exitCode=0 Dec 01 15:55:38 crc kubenswrapper[4637]: I1201 15:55:38.732714 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9spm" event={"ID":"23d95adc-2953-47f5-bf61-15b5eb73fe52","Type":"ContainerDied","Data":"526aa2b7611fd87ae19098055280b07b32075faa1d70c9d3d6242551d37762ae"} Dec 01 15:55:38 crc kubenswrapper[4637]: I1201 15:55:38.732921 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9spm" event={"ID":"23d95adc-2953-47f5-bf61-15b5eb73fe52","Type":"ContainerStarted","Data":"869f4ac65fd617d1de4e0577073cf6fa3358df6d7eb3ce52d8183c7ac33f2694"} Dec 01 15:55:40 crc kubenswrapper[4637]: I1201 15:55:40.368487 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 01 15:55:40 crc kubenswrapper[4637]: I1201 15:55:40.370742 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 15:55:40 crc kubenswrapper[4637]: I1201 15:55:40.377697 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 01 15:55:40 crc kubenswrapper[4637]: I1201 15:55:40.378091 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-4wp8n" Dec 01 15:55:40 crc kubenswrapper[4637]: I1201 15:55:40.545021 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d820189d-4832-48ee-93e1-d501b8ef91b8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 15:55:40 crc kubenswrapper[4637]: I1201 15:55:40.545343 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqdlg\" (UniqueName: \"kubernetes.io/projected/d820189d-4832-48ee-93e1-d501b8ef91b8-kube-api-access-kqdlg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d820189d-4832-48ee-93e1-d501b8ef91b8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 15:55:40 crc kubenswrapper[4637]: I1201 15:55:40.646668 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqdlg\" (UniqueName: \"kubernetes.io/projected/d820189d-4832-48ee-93e1-d501b8ef91b8-kube-api-access-kqdlg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d820189d-4832-48ee-93e1-d501b8ef91b8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 15:55:40 crc kubenswrapper[4637]: I1201 15:55:40.646804 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d820189d-4832-48ee-93e1-d501b8ef91b8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 15:55:40 crc kubenswrapper[4637]: I1201 15:55:40.647910 4637 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d820189d-4832-48ee-93e1-d501b8ef91b8\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 15:55:40 crc kubenswrapper[4637]: I1201 15:55:40.667498 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqdlg\" (UniqueName: \"kubernetes.io/projected/d820189d-4832-48ee-93e1-d501b8ef91b8-kube-api-access-kqdlg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d820189d-4832-48ee-93e1-d501b8ef91b8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 15:55:40 crc kubenswrapper[4637]: I1201 15:55:40.675000 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d820189d-4832-48ee-93e1-d501b8ef91b8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 15:55:40 crc kubenswrapper[4637]: I1201 15:55:40.694274 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 15:55:41 crc kubenswrapper[4637]: I1201 15:55:41.157548 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 01 15:55:41 crc kubenswrapper[4637]: W1201 15:55:41.180690 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd820189d_4832_48ee_93e1_d501b8ef91b8.slice/crio-1b2060951bfd9f9813ba3ad863571e1b3462f2b3135cf1238460df4331550e5b WatchSource:0}: Error finding container 1b2060951bfd9f9813ba3ad863571e1b3462f2b3135cf1238460df4331550e5b: Status 404 returned error can't find the container with id 1b2060951bfd9f9813ba3ad863571e1b3462f2b3135cf1238460df4331550e5b Dec 01 15:55:41 crc kubenswrapper[4637]: I1201 15:55:41.782596 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"d820189d-4832-48ee-93e1-d501b8ef91b8","Type":"ContainerStarted","Data":"1b2060951bfd9f9813ba3ad863571e1b3462f2b3135cf1238460df4331550e5b"} Dec 01 15:55:43 crc kubenswrapper[4637]: I1201 15:55:43.791750 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"d820189d-4832-48ee-93e1-d501b8ef91b8","Type":"ContainerStarted","Data":"70df4f3689f1c25bbdee5da1b742bf770cd9abc7710b2ef243f4623670cda6e7"} Dec 01 15:55:43 crc kubenswrapper[4637]: I1201 15:55:43.818438 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.5638334600000001 podStartE2EDuration="3.818417241s" podCreationTimestamp="2025-12-01 15:55:40 +0000 UTC" firstStartedPulling="2025-12-01 15:55:41.184371716 +0000 UTC m=+4191.702080544" lastFinishedPulling="2025-12-01 15:55:43.438955497 +0000 UTC m=+4193.956664325" 
observedRunningTime="2025-12-01 15:55:43.810715532 +0000 UTC m=+4194.328424370" watchObservedRunningTime="2025-12-01 15:55:43.818417241 +0000 UTC m=+4194.336126069" Dec 01 15:55:50 crc kubenswrapper[4637]: I1201 15:55:50.926161 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9spm" event={"ID":"23d95adc-2953-47f5-bf61-15b5eb73fe52","Type":"ContainerStarted","Data":"98158f7dd394c95667aa76efb73a106f36f9f328036e18b94c65a6a288d4f25e"} Dec 01 15:55:52 crc kubenswrapper[4637]: I1201 15:55:52.947555 4637 generic.go:334] "Generic (PLEG): container finished" podID="23d95adc-2953-47f5-bf61-15b5eb73fe52" containerID="98158f7dd394c95667aa76efb73a106f36f9f328036e18b94c65a6a288d4f25e" exitCode=0 Dec 01 15:55:52 crc kubenswrapper[4637]: I1201 15:55:52.947659 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9spm" event={"ID":"23d95adc-2953-47f5-bf61-15b5eb73fe52","Type":"ContainerDied","Data":"98158f7dd394c95667aa76efb73a106f36f9f328036e18b94c65a6a288d4f25e"} Dec 01 15:55:54 crc kubenswrapper[4637]: I1201 15:55:54.968145 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9spm" event={"ID":"23d95adc-2953-47f5-bf61-15b5eb73fe52","Type":"ContainerStarted","Data":"cbeafdbb93fabc63743b577b21764dc0da5558e607fa2d570c4228b23500e0ec"} Dec 01 15:55:55 crc kubenswrapper[4637]: I1201 15:55:54.999645 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k9spm" podStartSLOduration=3.670908862 podStartE2EDuration="18.999625351s" podCreationTimestamp="2025-12-01 15:55:36 +0000 UTC" firstStartedPulling="2025-12-01 15:55:38.73615861 +0000 UTC m=+4189.253867448" lastFinishedPulling="2025-12-01 15:55:54.064875109 +0000 UTC m=+4204.582583937" observedRunningTime="2025-12-01 15:55:54.988658115 +0000 UTC m=+4205.506366943" watchObservedRunningTime="2025-12-01 15:55:54.999625351 +0000 UTC 
m=+4205.517334179" Dec 01 15:55:57 crc kubenswrapper[4637]: I1201 15:55:57.245437 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k9spm" Dec 01 15:55:57 crc kubenswrapper[4637]: I1201 15:55:57.246075 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k9spm" Dec 01 15:55:58 crc kubenswrapper[4637]: I1201 15:55:58.289389 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k9spm" podUID="23d95adc-2953-47f5-bf61-15b5eb73fe52" containerName="registry-server" probeResult="failure" output=< Dec 01 15:55:58 crc kubenswrapper[4637]: timeout: failed to connect service ":50051" within 1s Dec 01 15:55:58 crc kubenswrapper[4637]: > Dec 01 15:56:05 crc kubenswrapper[4637]: I1201 15:56:05.033475 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6cqvd/must-gather-h6dfh"] Dec 01 15:56:05 crc kubenswrapper[4637]: I1201 15:56:05.036520 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6cqvd/must-gather-h6dfh" Dec 01 15:56:05 crc kubenswrapper[4637]: I1201 15:56:05.038805 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6cqvd"/"openshift-service-ca.crt" Dec 01 15:56:05 crc kubenswrapper[4637]: I1201 15:56:05.038893 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6cqvd"/"kube-root-ca.crt" Dec 01 15:56:05 crc kubenswrapper[4637]: I1201 15:56:05.039286 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-6cqvd"/"default-dockercfg-95bl4" Dec 01 15:56:05 crc kubenswrapper[4637]: I1201 15:56:05.076421 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d6a8c17b-c35c-4c91-80a4-db1f0b8511ea-must-gather-output\") pod \"must-gather-h6dfh\" (UID: \"d6a8c17b-c35c-4c91-80a4-db1f0b8511ea\") " pod="openshift-must-gather-6cqvd/must-gather-h6dfh" Dec 01 15:56:05 crc kubenswrapper[4637]: I1201 15:56:05.076997 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzkxh\" (UniqueName: \"kubernetes.io/projected/d6a8c17b-c35c-4c91-80a4-db1f0b8511ea-kube-api-access-tzkxh\") pod \"must-gather-h6dfh\" (UID: \"d6a8c17b-c35c-4c91-80a4-db1f0b8511ea\") " pod="openshift-must-gather-6cqvd/must-gather-h6dfh" Dec 01 15:56:05 crc kubenswrapper[4637]: I1201 15:56:05.081047 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6cqvd/must-gather-h6dfh"] Dec 01 15:56:05 crc kubenswrapper[4637]: I1201 15:56:05.180480 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d6a8c17b-c35c-4c91-80a4-db1f0b8511ea-must-gather-output\") pod \"must-gather-h6dfh\" (UID: \"d6a8c17b-c35c-4c91-80a4-db1f0b8511ea\") " 
pod="openshift-must-gather-6cqvd/must-gather-h6dfh" Dec 01 15:56:05 crc kubenswrapper[4637]: I1201 15:56:05.180593 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzkxh\" (UniqueName: \"kubernetes.io/projected/d6a8c17b-c35c-4c91-80a4-db1f0b8511ea-kube-api-access-tzkxh\") pod \"must-gather-h6dfh\" (UID: \"d6a8c17b-c35c-4c91-80a4-db1f0b8511ea\") " pod="openshift-must-gather-6cqvd/must-gather-h6dfh" Dec 01 15:56:05 crc kubenswrapper[4637]: I1201 15:56:05.181757 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d6a8c17b-c35c-4c91-80a4-db1f0b8511ea-must-gather-output\") pod \"must-gather-h6dfh\" (UID: \"d6a8c17b-c35c-4c91-80a4-db1f0b8511ea\") " pod="openshift-must-gather-6cqvd/must-gather-h6dfh" Dec 01 15:56:05 crc kubenswrapper[4637]: I1201 15:56:05.232140 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzkxh\" (UniqueName: \"kubernetes.io/projected/d6a8c17b-c35c-4c91-80a4-db1f0b8511ea-kube-api-access-tzkxh\") pod \"must-gather-h6dfh\" (UID: \"d6a8c17b-c35c-4c91-80a4-db1f0b8511ea\") " pod="openshift-must-gather-6cqvd/must-gather-h6dfh" Dec 01 15:56:05 crc kubenswrapper[4637]: I1201 15:56:05.375453 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6cqvd/must-gather-h6dfh" Dec 01 15:56:06 crc kubenswrapper[4637]: I1201 15:56:06.013514 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6cqvd/must-gather-h6dfh"] Dec 01 15:56:06 crc kubenswrapper[4637]: W1201 15:56:06.015832 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6a8c17b_c35c_4c91_80a4_db1f0b8511ea.slice/crio-2bf9eedb2d5fa1ddd143fac610a10501f54061cf68141e9cc56462e5e807567a WatchSource:0}: Error finding container 2bf9eedb2d5fa1ddd143fac610a10501f54061cf68141e9cc56462e5e807567a: Status 404 returned error can't find the container with id 2bf9eedb2d5fa1ddd143fac610a10501f54061cf68141e9cc56462e5e807567a Dec 01 15:56:06 crc kubenswrapper[4637]: I1201 15:56:06.065839 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6cqvd/must-gather-h6dfh" event={"ID":"d6a8c17b-c35c-4c91-80a4-db1f0b8511ea","Type":"ContainerStarted","Data":"2bf9eedb2d5fa1ddd143fac610a10501f54061cf68141e9cc56462e5e807567a"} Dec 01 15:56:07 crc kubenswrapper[4637]: I1201 15:56:07.307142 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k9spm" Dec 01 15:56:07 crc kubenswrapper[4637]: I1201 15:56:07.381297 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k9spm" Dec 01 15:56:07 crc kubenswrapper[4637]: I1201 15:56:07.938741 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k9spm"] Dec 01 15:56:08 crc kubenswrapper[4637]: I1201 15:56:08.121857 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ldz5j"] Dec 01 15:56:08 crc kubenswrapper[4637]: I1201 15:56:08.122414 4637 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-ldz5j" podUID="5efdb983-9c61-4647-9f5b-aad26de5b2d6" containerName="registry-server" containerID="cri-o://a23404b80655a6203235b36624b148493d4982ec39ed5a60a9f9aab68209b70e" gracePeriod=2 Dec 01 15:56:09 crc kubenswrapper[4637]: I1201 15:56:09.034324 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ldz5j" Dec 01 15:56:09 crc kubenswrapper[4637]: I1201 15:56:09.112920 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5efdb983-9c61-4647-9f5b-aad26de5b2d6-utilities\") pod \"5efdb983-9c61-4647-9f5b-aad26de5b2d6\" (UID: \"5efdb983-9c61-4647-9f5b-aad26de5b2d6\") " Dec 01 15:56:09 crc kubenswrapper[4637]: I1201 15:56:09.113021 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5efdb983-9c61-4647-9f5b-aad26de5b2d6-catalog-content\") pod \"5efdb983-9c61-4647-9f5b-aad26de5b2d6\" (UID: \"5efdb983-9c61-4647-9f5b-aad26de5b2d6\") " Dec 01 15:56:09 crc kubenswrapper[4637]: I1201 15:56:09.113154 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q972p\" (UniqueName: \"kubernetes.io/projected/5efdb983-9c61-4647-9f5b-aad26de5b2d6-kube-api-access-q972p\") pod \"5efdb983-9c61-4647-9f5b-aad26de5b2d6\" (UID: \"5efdb983-9c61-4647-9f5b-aad26de5b2d6\") " Dec 01 15:56:09 crc kubenswrapper[4637]: I1201 15:56:09.115047 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5efdb983-9c61-4647-9f5b-aad26de5b2d6-utilities" (OuterVolumeSpecName: "utilities") pod "5efdb983-9c61-4647-9f5b-aad26de5b2d6" (UID: "5efdb983-9c61-4647-9f5b-aad26de5b2d6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:56:09 crc kubenswrapper[4637]: I1201 15:56:09.115695 4637 generic.go:334] "Generic (PLEG): container finished" podID="5efdb983-9c61-4647-9f5b-aad26de5b2d6" containerID="a23404b80655a6203235b36624b148493d4982ec39ed5a60a9f9aab68209b70e" exitCode=0 Dec 01 15:56:09 crc kubenswrapper[4637]: I1201 15:56:09.116167 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldz5j" event={"ID":"5efdb983-9c61-4647-9f5b-aad26de5b2d6","Type":"ContainerDied","Data":"a23404b80655a6203235b36624b148493d4982ec39ed5a60a9f9aab68209b70e"} Dec 01 15:56:09 crc kubenswrapper[4637]: I1201 15:56:09.116203 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ldz5j" Dec 01 15:56:09 crc kubenswrapper[4637]: I1201 15:56:09.116249 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldz5j" event={"ID":"5efdb983-9c61-4647-9f5b-aad26de5b2d6","Type":"ContainerDied","Data":"22be1858d973634af1baf43d99eb803ad39fbb906cc551568f00187532f87db1"} Dec 01 15:56:09 crc kubenswrapper[4637]: I1201 15:56:09.116277 4637 scope.go:117] "RemoveContainer" containerID="a23404b80655a6203235b36624b148493d4982ec39ed5a60a9f9aab68209b70e" Dec 01 15:56:09 crc kubenswrapper[4637]: I1201 15:56:09.124565 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5efdb983-9c61-4647-9f5b-aad26de5b2d6-kube-api-access-q972p" (OuterVolumeSpecName: "kube-api-access-q972p") pod "5efdb983-9c61-4647-9f5b-aad26de5b2d6" (UID: "5efdb983-9c61-4647-9f5b-aad26de5b2d6"). InnerVolumeSpecName "kube-api-access-q972p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:56:09 crc kubenswrapper[4637]: I1201 15:56:09.200886 4637 scope.go:117] "RemoveContainer" containerID="d05f9a1a1cd4298df0a2e20f9454aaebb6d3314ddfac558ffafb92655340e375" Dec 01 15:56:09 crc kubenswrapper[4637]: I1201 15:56:09.213402 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5efdb983-9c61-4647-9f5b-aad26de5b2d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5efdb983-9c61-4647-9f5b-aad26de5b2d6" (UID: "5efdb983-9c61-4647-9f5b-aad26de5b2d6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 15:56:09 crc kubenswrapper[4637]: I1201 15:56:09.216795 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q972p\" (UniqueName: \"kubernetes.io/projected/5efdb983-9c61-4647-9f5b-aad26de5b2d6-kube-api-access-q972p\") on node \"crc\" DevicePath \"\"" Dec 01 15:56:09 crc kubenswrapper[4637]: I1201 15:56:09.216924 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5efdb983-9c61-4647-9f5b-aad26de5b2d6-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 15:56:09 crc kubenswrapper[4637]: I1201 15:56:09.217039 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5efdb983-9c61-4647-9f5b-aad26de5b2d6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 15:56:09 crc kubenswrapper[4637]: I1201 15:56:09.250174 4637 scope.go:117] "RemoveContainer" containerID="b177e50a094493202deac264f262e04c646b87780c2a05247bfa667b74dd24e8" Dec 01 15:56:09 crc kubenswrapper[4637]: I1201 15:56:09.320316 4637 scope.go:117] "RemoveContainer" containerID="a23404b80655a6203235b36624b148493d4982ec39ed5a60a9f9aab68209b70e" Dec 01 15:56:09 crc kubenswrapper[4637]: E1201 15:56:09.321376 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"a23404b80655a6203235b36624b148493d4982ec39ed5a60a9f9aab68209b70e\": container with ID starting with a23404b80655a6203235b36624b148493d4982ec39ed5a60a9f9aab68209b70e not found: ID does not exist" containerID="a23404b80655a6203235b36624b148493d4982ec39ed5a60a9f9aab68209b70e" Dec 01 15:56:09 crc kubenswrapper[4637]: I1201 15:56:09.321423 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a23404b80655a6203235b36624b148493d4982ec39ed5a60a9f9aab68209b70e"} err="failed to get container status \"a23404b80655a6203235b36624b148493d4982ec39ed5a60a9f9aab68209b70e\": rpc error: code = NotFound desc = could not find container \"a23404b80655a6203235b36624b148493d4982ec39ed5a60a9f9aab68209b70e\": container with ID starting with a23404b80655a6203235b36624b148493d4982ec39ed5a60a9f9aab68209b70e not found: ID does not exist" Dec 01 15:56:09 crc kubenswrapper[4637]: I1201 15:56:09.321452 4637 scope.go:117] "RemoveContainer" containerID="d05f9a1a1cd4298df0a2e20f9454aaebb6d3314ddfac558ffafb92655340e375" Dec 01 15:56:09 crc kubenswrapper[4637]: E1201 15:56:09.321694 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d05f9a1a1cd4298df0a2e20f9454aaebb6d3314ddfac558ffafb92655340e375\": container with ID starting with d05f9a1a1cd4298df0a2e20f9454aaebb6d3314ddfac558ffafb92655340e375 not found: ID does not exist" containerID="d05f9a1a1cd4298df0a2e20f9454aaebb6d3314ddfac558ffafb92655340e375" Dec 01 15:56:09 crc kubenswrapper[4637]: I1201 15:56:09.321711 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d05f9a1a1cd4298df0a2e20f9454aaebb6d3314ddfac558ffafb92655340e375"} err="failed to get container status \"d05f9a1a1cd4298df0a2e20f9454aaebb6d3314ddfac558ffafb92655340e375\": rpc error: code = NotFound desc = could not find container 
\"d05f9a1a1cd4298df0a2e20f9454aaebb6d3314ddfac558ffafb92655340e375\": container with ID starting with d05f9a1a1cd4298df0a2e20f9454aaebb6d3314ddfac558ffafb92655340e375 not found: ID does not exist" Dec 01 15:56:09 crc kubenswrapper[4637]: I1201 15:56:09.321731 4637 scope.go:117] "RemoveContainer" containerID="b177e50a094493202deac264f262e04c646b87780c2a05247bfa667b74dd24e8" Dec 01 15:56:09 crc kubenswrapper[4637]: E1201 15:56:09.324193 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b177e50a094493202deac264f262e04c646b87780c2a05247bfa667b74dd24e8\": container with ID starting with b177e50a094493202deac264f262e04c646b87780c2a05247bfa667b74dd24e8 not found: ID does not exist" containerID="b177e50a094493202deac264f262e04c646b87780c2a05247bfa667b74dd24e8" Dec 01 15:56:09 crc kubenswrapper[4637]: I1201 15:56:09.324255 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b177e50a094493202deac264f262e04c646b87780c2a05247bfa667b74dd24e8"} err="failed to get container status \"b177e50a094493202deac264f262e04c646b87780c2a05247bfa667b74dd24e8\": rpc error: code = NotFound desc = could not find container \"b177e50a094493202deac264f262e04c646b87780c2a05247bfa667b74dd24e8\": container with ID starting with b177e50a094493202deac264f262e04c646b87780c2a05247bfa667b74dd24e8 not found: ID does not exist" Dec 01 15:56:09 crc kubenswrapper[4637]: I1201 15:56:09.485696 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ldz5j"] Dec 01 15:56:09 crc kubenswrapper[4637]: I1201 15:56:09.500110 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ldz5j"] Dec 01 15:56:09 crc kubenswrapper[4637]: I1201 15:56:09.790801 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5efdb983-9c61-4647-9f5b-aad26de5b2d6" 
path="/var/lib/kubelet/pods/5efdb983-9c61-4647-9f5b-aad26de5b2d6/volumes" Dec 01 15:56:14 crc kubenswrapper[4637]: I1201 15:56:14.193769 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6cqvd/must-gather-h6dfh" event={"ID":"d6a8c17b-c35c-4c91-80a4-db1f0b8511ea","Type":"ContainerStarted","Data":"a802608f32028bf8bc6c6bf955bc3f788475314068b8bd0fbba185f2898e9868"} Dec 01 15:56:15 crc kubenswrapper[4637]: I1201 15:56:15.212362 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6cqvd/must-gather-h6dfh" event={"ID":"d6a8c17b-c35c-4c91-80a4-db1f0b8511ea","Type":"ContainerStarted","Data":"1af12d5251802f757ed06e6348032c7e48615a89054bb3aae693722e7d4e23e8"} Dec 01 15:56:15 crc kubenswrapper[4637]: I1201 15:56:15.241345 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6cqvd/must-gather-h6dfh" podStartSLOduration=2.475280474 podStartE2EDuration="10.241315944s" podCreationTimestamp="2025-12-01 15:56:05 +0000 UTC" firstStartedPulling="2025-12-01 15:56:06.024602627 +0000 UTC m=+4216.542311455" lastFinishedPulling="2025-12-01 15:56:13.790638097 +0000 UTC m=+4224.308346925" observedRunningTime="2025-12-01 15:56:15.22635346 +0000 UTC m=+4225.744062308" watchObservedRunningTime="2025-12-01 15:56:15.241315944 +0000 UTC m=+4225.759024772" Dec 01 15:56:18 crc kubenswrapper[4637]: I1201 15:56:18.822968 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6cqvd/crc-debug-r9hp6"] Dec 01 15:56:18 crc kubenswrapper[4637]: E1201 15:56:18.823856 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5efdb983-9c61-4647-9f5b-aad26de5b2d6" containerName="extract-utilities" Dec 01 15:56:18 crc kubenswrapper[4637]: I1201 15:56:18.823871 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="5efdb983-9c61-4647-9f5b-aad26de5b2d6" containerName="extract-utilities" Dec 01 15:56:18 crc kubenswrapper[4637]: E1201 15:56:18.823881 4637 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5efdb983-9c61-4647-9f5b-aad26de5b2d6" containerName="registry-server" Dec 01 15:56:18 crc kubenswrapper[4637]: I1201 15:56:18.823887 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="5efdb983-9c61-4647-9f5b-aad26de5b2d6" containerName="registry-server" Dec 01 15:56:18 crc kubenswrapper[4637]: E1201 15:56:18.823912 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5efdb983-9c61-4647-9f5b-aad26de5b2d6" containerName="extract-content" Dec 01 15:56:18 crc kubenswrapper[4637]: I1201 15:56:18.823919 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="5efdb983-9c61-4647-9f5b-aad26de5b2d6" containerName="extract-content" Dec 01 15:56:18 crc kubenswrapper[4637]: I1201 15:56:18.824152 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="5efdb983-9c61-4647-9f5b-aad26de5b2d6" containerName="registry-server" Dec 01 15:56:18 crc kubenswrapper[4637]: I1201 15:56:18.824821 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6cqvd/crc-debug-r9hp6" Dec 01 15:56:18 crc kubenswrapper[4637]: I1201 15:56:18.927165 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw9p4\" (UniqueName: \"kubernetes.io/projected/67691361-1100-445e-a9ce-0c78d380976f-kube-api-access-vw9p4\") pod \"crc-debug-r9hp6\" (UID: \"67691361-1100-445e-a9ce-0c78d380976f\") " pod="openshift-must-gather-6cqvd/crc-debug-r9hp6" Dec 01 15:56:18 crc kubenswrapper[4637]: I1201 15:56:18.927540 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/67691361-1100-445e-a9ce-0c78d380976f-host\") pod \"crc-debug-r9hp6\" (UID: \"67691361-1100-445e-a9ce-0c78d380976f\") " pod="openshift-must-gather-6cqvd/crc-debug-r9hp6" Dec 01 15:56:19 crc kubenswrapper[4637]: I1201 15:56:19.029583 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw9p4\" (UniqueName: \"kubernetes.io/projected/67691361-1100-445e-a9ce-0c78d380976f-kube-api-access-vw9p4\") pod \"crc-debug-r9hp6\" (UID: \"67691361-1100-445e-a9ce-0c78d380976f\") " pod="openshift-must-gather-6cqvd/crc-debug-r9hp6" Dec 01 15:56:19 crc kubenswrapper[4637]: I1201 15:56:19.029697 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/67691361-1100-445e-a9ce-0c78d380976f-host\") pod \"crc-debug-r9hp6\" (UID: \"67691361-1100-445e-a9ce-0c78d380976f\") " pod="openshift-must-gather-6cqvd/crc-debug-r9hp6" Dec 01 15:56:19 crc kubenswrapper[4637]: I1201 15:56:19.029787 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/67691361-1100-445e-a9ce-0c78d380976f-host\") pod \"crc-debug-r9hp6\" (UID: \"67691361-1100-445e-a9ce-0c78d380976f\") " pod="openshift-must-gather-6cqvd/crc-debug-r9hp6" Dec 01 15:56:19 crc 
kubenswrapper[4637]: I1201 15:56:19.051138 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw9p4\" (UniqueName: \"kubernetes.io/projected/67691361-1100-445e-a9ce-0c78d380976f-kube-api-access-vw9p4\") pod \"crc-debug-r9hp6\" (UID: \"67691361-1100-445e-a9ce-0c78d380976f\") " pod="openshift-must-gather-6cqvd/crc-debug-r9hp6" Dec 01 15:56:19 crc kubenswrapper[4637]: I1201 15:56:19.141646 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6cqvd/crc-debug-r9hp6" Dec 01 15:56:19 crc kubenswrapper[4637]: W1201 15:56:19.177509 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67691361_1100_445e_a9ce_0c78d380976f.slice/crio-d6d8b88e56bf9555d950bd8132e604e16e958921e21ef17953d9ce69a4790e33 WatchSource:0}: Error finding container d6d8b88e56bf9555d950bd8132e604e16e958921e21ef17953d9ce69a4790e33: Status 404 returned error can't find the container with id d6d8b88e56bf9555d950bd8132e604e16e958921e21ef17953d9ce69a4790e33 Dec 01 15:56:19 crc kubenswrapper[4637]: I1201 15:56:19.251737 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6cqvd/crc-debug-r9hp6" event={"ID":"67691361-1100-445e-a9ce-0c78d380976f","Type":"ContainerStarted","Data":"d6d8b88e56bf9555d950bd8132e604e16e958921e21ef17953d9ce69a4790e33"} Dec 01 15:56:33 crc kubenswrapper[4637]: I1201 15:56:33.431657 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6cqvd/crc-debug-r9hp6" event={"ID":"67691361-1100-445e-a9ce-0c78d380976f","Type":"ContainerStarted","Data":"c98291f8cc5b7ede556b99040e15f058834182a2922a01f2f06f9252fb334131"} Dec 01 15:56:33 crc kubenswrapper[4637]: I1201 15:56:33.450679 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6cqvd/crc-debug-r9hp6" podStartSLOduration=2.581557813 podStartE2EDuration="15.450654527s" 
podCreationTimestamp="2025-12-01 15:56:18 +0000 UTC" firstStartedPulling="2025-12-01 15:56:19.180166109 +0000 UTC m=+4229.697874937" lastFinishedPulling="2025-12-01 15:56:32.049262823 +0000 UTC m=+4242.566971651" observedRunningTime="2025-12-01 15:56:33.443690599 +0000 UTC m=+4243.961399427" watchObservedRunningTime="2025-12-01 15:56:33.450654527 +0000 UTC m=+4243.968363365" Dec 01 15:56:45 crc kubenswrapper[4637]: I1201 15:56:45.613898 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:56:45 crc kubenswrapper[4637]: I1201 15:56:45.614635 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:57:15 crc kubenswrapper[4637]: I1201 15:57:15.613781 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:57:15 crc kubenswrapper[4637]: I1201 15:57:15.616041 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:57:24 crc kubenswrapper[4637]: I1201 15:57:24.157489 4637 generic.go:334] "Generic (PLEG): container finished" 
podID="67691361-1100-445e-a9ce-0c78d380976f" containerID="c98291f8cc5b7ede556b99040e15f058834182a2922a01f2f06f9252fb334131" exitCode=0 Dec 01 15:57:24 crc kubenswrapper[4637]: I1201 15:57:24.157581 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6cqvd/crc-debug-r9hp6" event={"ID":"67691361-1100-445e-a9ce-0c78d380976f","Type":"ContainerDied","Data":"c98291f8cc5b7ede556b99040e15f058834182a2922a01f2f06f9252fb334131"} Dec 01 15:57:25 crc kubenswrapper[4637]: I1201 15:57:25.287819 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6cqvd/crc-debug-r9hp6" Dec 01 15:57:25 crc kubenswrapper[4637]: I1201 15:57:25.322917 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6cqvd/crc-debug-r9hp6"] Dec 01 15:57:25 crc kubenswrapper[4637]: I1201 15:57:25.327156 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/67691361-1100-445e-a9ce-0c78d380976f-host\") pod \"67691361-1100-445e-a9ce-0c78d380976f\" (UID: \"67691361-1100-445e-a9ce-0c78d380976f\") " Dec 01 15:57:25 crc kubenswrapper[4637]: I1201 15:57:25.327213 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw9p4\" (UniqueName: \"kubernetes.io/projected/67691361-1100-445e-a9ce-0c78d380976f-kube-api-access-vw9p4\") pod \"67691361-1100-445e-a9ce-0c78d380976f\" (UID: \"67691361-1100-445e-a9ce-0c78d380976f\") " Dec 01 15:57:25 crc kubenswrapper[4637]: I1201 15:57:25.327617 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67691361-1100-445e-a9ce-0c78d380976f-host" (OuterVolumeSpecName: "host") pod "67691361-1100-445e-a9ce-0c78d380976f" (UID: "67691361-1100-445e-a9ce-0c78d380976f"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 15:57:25 crc kubenswrapper[4637]: I1201 15:57:25.327966 4637 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/67691361-1100-445e-a9ce-0c78d380976f-host\") on node \"crc\" DevicePath \"\"" Dec 01 15:57:25 crc kubenswrapper[4637]: I1201 15:57:25.331514 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6cqvd/crc-debug-r9hp6"] Dec 01 15:57:25 crc kubenswrapper[4637]: I1201 15:57:25.346290 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67691361-1100-445e-a9ce-0c78d380976f-kube-api-access-vw9p4" (OuterVolumeSpecName: "kube-api-access-vw9p4") pod "67691361-1100-445e-a9ce-0c78d380976f" (UID: "67691361-1100-445e-a9ce-0c78d380976f"). InnerVolumeSpecName "kube-api-access-vw9p4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:57:25 crc kubenswrapper[4637]: I1201 15:57:25.429805 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw9p4\" (UniqueName: \"kubernetes.io/projected/67691361-1100-445e-a9ce-0c78d380976f-kube-api-access-vw9p4\") on node \"crc\" DevicePath \"\"" Dec 01 15:57:25 crc kubenswrapper[4637]: I1201 15:57:25.784235 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67691361-1100-445e-a9ce-0c78d380976f" path="/var/lib/kubelet/pods/67691361-1100-445e-a9ce-0c78d380976f/volumes" Dec 01 15:57:26 crc kubenswrapper[4637]: I1201 15:57:26.177146 4637 scope.go:117] "RemoveContainer" containerID="c98291f8cc5b7ede556b99040e15f058834182a2922a01f2f06f9252fb334131" Dec 01 15:57:26 crc kubenswrapper[4637]: I1201 15:57:26.177301 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6cqvd/crc-debug-r9hp6" Dec 01 15:57:26 crc kubenswrapper[4637]: I1201 15:57:26.554184 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6cqvd/crc-debug-hq57z"] Dec 01 15:57:26 crc kubenswrapper[4637]: E1201 15:57:26.554692 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67691361-1100-445e-a9ce-0c78d380976f" containerName="container-00" Dec 01 15:57:26 crc kubenswrapper[4637]: I1201 15:57:26.554709 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="67691361-1100-445e-a9ce-0c78d380976f" containerName="container-00" Dec 01 15:57:26 crc kubenswrapper[4637]: I1201 15:57:26.555088 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="67691361-1100-445e-a9ce-0c78d380976f" containerName="container-00" Dec 01 15:57:26 crc kubenswrapper[4637]: I1201 15:57:26.555845 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6cqvd/crc-debug-hq57z" Dec 01 15:57:26 crc kubenswrapper[4637]: I1201 15:57:26.656799 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxtzr\" (UniqueName: \"kubernetes.io/projected/f4e106ed-350e-4c64-8497-7f65fc05559c-kube-api-access-pxtzr\") pod \"crc-debug-hq57z\" (UID: \"f4e106ed-350e-4c64-8497-7f65fc05559c\") " pod="openshift-must-gather-6cqvd/crc-debug-hq57z" Dec 01 15:57:26 crc kubenswrapper[4637]: I1201 15:57:26.656866 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f4e106ed-350e-4c64-8497-7f65fc05559c-host\") pod \"crc-debug-hq57z\" (UID: \"f4e106ed-350e-4c64-8497-7f65fc05559c\") " pod="openshift-must-gather-6cqvd/crc-debug-hq57z" Dec 01 15:57:26 crc kubenswrapper[4637]: I1201 15:57:26.759282 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxtzr\" (UniqueName: 
\"kubernetes.io/projected/f4e106ed-350e-4c64-8497-7f65fc05559c-kube-api-access-pxtzr\") pod \"crc-debug-hq57z\" (UID: \"f4e106ed-350e-4c64-8497-7f65fc05559c\") " pod="openshift-must-gather-6cqvd/crc-debug-hq57z" Dec 01 15:57:26 crc kubenswrapper[4637]: I1201 15:57:26.759840 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f4e106ed-350e-4c64-8497-7f65fc05559c-host\") pod \"crc-debug-hq57z\" (UID: \"f4e106ed-350e-4c64-8497-7f65fc05559c\") " pod="openshift-must-gather-6cqvd/crc-debug-hq57z" Dec 01 15:57:26 crc kubenswrapper[4637]: I1201 15:57:26.759924 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f4e106ed-350e-4c64-8497-7f65fc05559c-host\") pod \"crc-debug-hq57z\" (UID: \"f4e106ed-350e-4c64-8497-7f65fc05559c\") " pod="openshift-must-gather-6cqvd/crc-debug-hq57z" Dec 01 15:57:26 crc kubenswrapper[4637]: I1201 15:57:26.781285 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxtzr\" (UniqueName: \"kubernetes.io/projected/f4e106ed-350e-4c64-8497-7f65fc05559c-kube-api-access-pxtzr\") pod \"crc-debug-hq57z\" (UID: \"f4e106ed-350e-4c64-8497-7f65fc05559c\") " pod="openshift-must-gather-6cqvd/crc-debug-hq57z" Dec 01 15:57:26 crc kubenswrapper[4637]: I1201 15:57:26.879752 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6cqvd/crc-debug-hq57z" Dec 01 15:57:27 crc kubenswrapper[4637]: I1201 15:57:27.191186 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6cqvd/crc-debug-hq57z" event={"ID":"f4e106ed-350e-4c64-8497-7f65fc05559c","Type":"ContainerStarted","Data":"39be7bf1a8e834cb27430c546947d20fc11cad8b99314fa9a0b6d0e6ac452ae5"} Dec 01 15:57:27 crc kubenswrapper[4637]: I1201 15:57:27.191560 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6cqvd/crc-debug-hq57z" event={"ID":"f4e106ed-350e-4c64-8497-7f65fc05559c","Type":"ContainerStarted","Data":"c991169676bd7f65b8429bc39d3f1e486276ebd3b7291fdeebc8f55a3fa7a5f6"} Dec 01 15:57:27 crc kubenswrapper[4637]: I1201 15:57:27.216143 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6cqvd/crc-debug-hq57z" podStartSLOduration=1.216083247 podStartE2EDuration="1.216083247s" podCreationTimestamp="2025-12-01 15:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 15:57:27.206847117 +0000 UTC m=+4297.724555945" watchObservedRunningTime="2025-12-01 15:57:27.216083247 +0000 UTC m=+4297.733792075" Dec 01 15:57:28 crc kubenswrapper[4637]: I1201 15:57:28.200431 4637 generic.go:334] "Generic (PLEG): container finished" podID="f4e106ed-350e-4c64-8497-7f65fc05559c" containerID="39be7bf1a8e834cb27430c546947d20fc11cad8b99314fa9a0b6d0e6ac452ae5" exitCode=0 Dec 01 15:57:28 crc kubenswrapper[4637]: I1201 15:57:28.201523 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6cqvd/crc-debug-hq57z" event={"ID":"f4e106ed-350e-4c64-8497-7f65fc05559c","Type":"ContainerDied","Data":"39be7bf1a8e834cb27430c546947d20fc11cad8b99314fa9a0b6d0e6ac452ae5"} Dec 01 15:57:29 crc kubenswrapper[4637]: I1201 15:57:29.386277 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6cqvd/crc-debug-hq57z" Dec 01 15:57:29 crc kubenswrapper[4637]: I1201 15:57:29.523540 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f4e106ed-350e-4c64-8497-7f65fc05559c-host\") pod \"f4e106ed-350e-4c64-8497-7f65fc05559c\" (UID: \"f4e106ed-350e-4c64-8497-7f65fc05559c\") " Dec 01 15:57:29 crc kubenswrapper[4637]: I1201 15:57:29.523632 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxtzr\" (UniqueName: \"kubernetes.io/projected/f4e106ed-350e-4c64-8497-7f65fc05559c-kube-api-access-pxtzr\") pod \"f4e106ed-350e-4c64-8497-7f65fc05559c\" (UID: \"f4e106ed-350e-4c64-8497-7f65fc05559c\") " Dec 01 15:57:29 crc kubenswrapper[4637]: I1201 15:57:29.524232 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4e106ed-350e-4c64-8497-7f65fc05559c-host" (OuterVolumeSpecName: "host") pod "f4e106ed-350e-4c64-8497-7f65fc05559c" (UID: "f4e106ed-350e-4c64-8497-7f65fc05559c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 15:57:29 crc kubenswrapper[4637]: I1201 15:57:29.549297 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4e106ed-350e-4c64-8497-7f65fc05559c-kube-api-access-pxtzr" (OuterVolumeSpecName: "kube-api-access-pxtzr") pod "f4e106ed-350e-4c64-8497-7f65fc05559c" (UID: "f4e106ed-350e-4c64-8497-7f65fc05559c"). InnerVolumeSpecName "kube-api-access-pxtzr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:57:29 crc kubenswrapper[4637]: I1201 15:57:29.625333 4637 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f4e106ed-350e-4c64-8497-7f65fc05559c-host\") on node \"crc\" DevicePath \"\"" Dec 01 15:57:29 crc kubenswrapper[4637]: I1201 15:57:29.625365 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxtzr\" (UniqueName: \"kubernetes.io/projected/f4e106ed-350e-4c64-8497-7f65fc05559c-kube-api-access-pxtzr\") on node \"crc\" DevicePath \"\"" Dec 01 15:57:29 crc kubenswrapper[4637]: I1201 15:57:29.785679 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6cqvd/crc-debug-hq57z"] Dec 01 15:57:29 crc kubenswrapper[4637]: I1201 15:57:29.785725 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6cqvd/crc-debug-hq57z"] Dec 01 15:57:30 crc kubenswrapper[4637]: I1201 15:57:30.224247 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c991169676bd7f65b8429bc39d3f1e486276ebd3b7291fdeebc8f55a3fa7a5f6" Dec 01 15:57:30 crc kubenswrapper[4637]: I1201 15:57:30.224314 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6cqvd/crc-debug-hq57z" Dec 01 15:57:30 crc kubenswrapper[4637]: I1201 15:57:30.948212 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6cqvd/crc-debug-fdrrd"] Dec 01 15:57:30 crc kubenswrapper[4637]: E1201 15:57:30.949038 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e106ed-350e-4c64-8497-7f65fc05559c" containerName="container-00" Dec 01 15:57:30 crc kubenswrapper[4637]: I1201 15:57:30.949056 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e106ed-350e-4c64-8497-7f65fc05559c" containerName="container-00" Dec 01 15:57:30 crc kubenswrapper[4637]: I1201 15:57:30.949301 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4e106ed-350e-4c64-8497-7f65fc05559c" containerName="container-00" Dec 01 15:57:30 crc kubenswrapper[4637]: I1201 15:57:30.950241 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6cqvd/crc-debug-fdrrd" Dec 01 15:57:31 crc kubenswrapper[4637]: I1201 15:57:31.052953 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65qs9\" (UniqueName: \"kubernetes.io/projected/cf93ad78-3a41-4ac3-8484-3e9daa70b538-kube-api-access-65qs9\") pod \"crc-debug-fdrrd\" (UID: \"cf93ad78-3a41-4ac3-8484-3e9daa70b538\") " pod="openshift-must-gather-6cqvd/crc-debug-fdrrd" Dec 01 15:57:31 crc kubenswrapper[4637]: I1201 15:57:31.053010 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf93ad78-3a41-4ac3-8484-3e9daa70b538-host\") pod \"crc-debug-fdrrd\" (UID: \"cf93ad78-3a41-4ac3-8484-3e9daa70b538\") " pod="openshift-must-gather-6cqvd/crc-debug-fdrrd" Dec 01 15:57:31 crc kubenswrapper[4637]: I1201 15:57:31.154728 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65qs9\" (UniqueName: 
\"kubernetes.io/projected/cf93ad78-3a41-4ac3-8484-3e9daa70b538-kube-api-access-65qs9\") pod \"crc-debug-fdrrd\" (UID: \"cf93ad78-3a41-4ac3-8484-3e9daa70b538\") " pod="openshift-must-gather-6cqvd/crc-debug-fdrrd" Dec 01 15:57:31 crc kubenswrapper[4637]: I1201 15:57:31.154799 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf93ad78-3a41-4ac3-8484-3e9daa70b538-host\") pod \"crc-debug-fdrrd\" (UID: \"cf93ad78-3a41-4ac3-8484-3e9daa70b538\") " pod="openshift-must-gather-6cqvd/crc-debug-fdrrd" Dec 01 15:57:31 crc kubenswrapper[4637]: I1201 15:57:31.154971 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf93ad78-3a41-4ac3-8484-3e9daa70b538-host\") pod \"crc-debug-fdrrd\" (UID: \"cf93ad78-3a41-4ac3-8484-3e9daa70b538\") " pod="openshift-must-gather-6cqvd/crc-debug-fdrrd" Dec 01 15:57:31 crc kubenswrapper[4637]: I1201 15:57:31.179637 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65qs9\" (UniqueName: \"kubernetes.io/projected/cf93ad78-3a41-4ac3-8484-3e9daa70b538-kube-api-access-65qs9\") pod \"crc-debug-fdrrd\" (UID: \"cf93ad78-3a41-4ac3-8484-3e9daa70b538\") " pod="openshift-must-gather-6cqvd/crc-debug-fdrrd" Dec 01 15:57:31 crc kubenswrapper[4637]: I1201 15:57:31.265731 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6cqvd/crc-debug-fdrrd" Dec 01 15:57:31 crc kubenswrapper[4637]: I1201 15:57:31.784401 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4e106ed-350e-4c64-8497-7f65fc05559c" path="/var/lib/kubelet/pods/f4e106ed-350e-4c64-8497-7f65fc05559c/volumes" Dec 01 15:57:32 crc kubenswrapper[4637]: I1201 15:57:32.243224 4637 generic.go:334] "Generic (PLEG): container finished" podID="cf93ad78-3a41-4ac3-8484-3e9daa70b538" containerID="23fde0f77ae1b51341740e8c27137a76259684154789d4ec7a2b93312c339513" exitCode=0 Dec 01 15:57:32 crc kubenswrapper[4637]: I1201 15:57:32.243278 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6cqvd/crc-debug-fdrrd" event={"ID":"cf93ad78-3a41-4ac3-8484-3e9daa70b538","Type":"ContainerDied","Data":"23fde0f77ae1b51341740e8c27137a76259684154789d4ec7a2b93312c339513"} Dec 01 15:57:32 crc kubenswrapper[4637]: I1201 15:57:32.243318 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6cqvd/crc-debug-fdrrd" event={"ID":"cf93ad78-3a41-4ac3-8484-3e9daa70b538","Type":"ContainerStarted","Data":"24d0f59b30f96576c2b54e25c5689cbd5f0ce5591c644f55b0b1a92fd9e52c12"} Dec 01 15:57:32 crc kubenswrapper[4637]: I1201 15:57:32.288657 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6cqvd/crc-debug-fdrrd"] Dec 01 15:57:32 crc kubenswrapper[4637]: I1201 15:57:32.298305 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6cqvd/crc-debug-fdrrd"] Dec 01 15:57:33 crc kubenswrapper[4637]: I1201 15:57:33.470827 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6cqvd/crc-debug-fdrrd" Dec 01 15:57:33 crc kubenswrapper[4637]: I1201 15:57:33.607079 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65qs9\" (UniqueName: \"kubernetes.io/projected/cf93ad78-3a41-4ac3-8484-3e9daa70b538-kube-api-access-65qs9\") pod \"cf93ad78-3a41-4ac3-8484-3e9daa70b538\" (UID: \"cf93ad78-3a41-4ac3-8484-3e9daa70b538\") " Dec 01 15:57:33 crc kubenswrapper[4637]: I1201 15:57:33.607183 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf93ad78-3a41-4ac3-8484-3e9daa70b538-host\") pod \"cf93ad78-3a41-4ac3-8484-3e9daa70b538\" (UID: \"cf93ad78-3a41-4ac3-8484-3e9daa70b538\") " Dec 01 15:57:33 crc kubenswrapper[4637]: I1201 15:57:33.607407 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf93ad78-3a41-4ac3-8484-3e9daa70b538-host" (OuterVolumeSpecName: "host") pod "cf93ad78-3a41-4ac3-8484-3e9daa70b538" (UID: "cf93ad78-3a41-4ac3-8484-3e9daa70b538"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 15:57:33 crc kubenswrapper[4637]: I1201 15:57:33.607816 4637 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf93ad78-3a41-4ac3-8484-3e9daa70b538-host\") on node \"crc\" DevicePath \"\"" Dec 01 15:57:33 crc kubenswrapper[4637]: I1201 15:57:33.614400 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf93ad78-3a41-4ac3-8484-3e9daa70b538-kube-api-access-65qs9" (OuterVolumeSpecName: "kube-api-access-65qs9") pod "cf93ad78-3a41-4ac3-8484-3e9daa70b538" (UID: "cf93ad78-3a41-4ac3-8484-3e9daa70b538"). InnerVolumeSpecName "kube-api-access-65qs9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 15:57:33 crc kubenswrapper[4637]: I1201 15:57:33.709448 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65qs9\" (UniqueName: \"kubernetes.io/projected/cf93ad78-3a41-4ac3-8484-3e9daa70b538-kube-api-access-65qs9\") on node \"crc\" DevicePath \"\"" Dec 01 15:57:33 crc kubenswrapper[4637]: I1201 15:57:33.781824 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf93ad78-3a41-4ac3-8484-3e9daa70b538" path="/var/lib/kubelet/pods/cf93ad78-3a41-4ac3-8484-3e9daa70b538/volumes" Dec 01 15:57:34 crc kubenswrapper[4637]: I1201 15:57:34.260611 4637 scope.go:117] "RemoveContainer" containerID="23fde0f77ae1b51341740e8c27137a76259684154789d4ec7a2b93312c339513" Dec 01 15:57:34 crc kubenswrapper[4637]: I1201 15:57:34.260965 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6cqvd/crc-debug-fdrrd" Dec 01 15:57:45 crc kubenswrapper[4637]: I1201 15:57:45.613794 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 15:57:45 crc kubenswrapper[4637]: I1201 15:57:45.614253 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 15:57:45 crc kubenswrapper[4637]: I1201 15:57:45.614305 4637 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 15:57:45 crc kubenswrapper[4637]: I1201 15:57:45.615085 4637 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c"} pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 15:57:45 crc kubenswrapper[4637]: I1201 15:57:45.615143 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" containerID="cri-o://73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c" gracePeriod=600 Dec 01 15:57:45 crc kubenswrapper[4637]: E1201 15:57:45.762230 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:57:46 crc kubenswrapper[4637]: I1201 15:57:46.369506 4637 generic.go:334] "Generic (PLEG): container finished" podID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerID="73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c" exitCode=0 Dec 01 15:57:46 crc kubenswrapper[4637]: I1201 15:57:46.369574 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerDied","Data":"73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c"} Dec 01 15:57:46 crc kubenswrapper[4637]: I1201 15:57:46.369619 4637 scope.go:117] "RemoveContainer" 
containerID="a059050bdec537d7c8b69c1af325a13da15dcaef53005a3b465e81ccb049dfeb" Dec 01 15:57:46 crc kubenswrapper[4637]: I1201 15:57:46.370262 4637 scope.go:117] "RemoveContainer" containerID="73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c" Dec 01 15:57:46 crc kubenswrapper[4637]: E1201 15:57:46.370586 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:57:58 crc kubenswrapper[4637]: I1201 15:57:58.771268 4637 scope.go:117] "RemoveContainer" containerID="73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c" Dec 01 15:57:58 crc kubenswrapper[4637]: E1201 15:57:58.772080 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:58:01 crc kubenswrapper[4637]: I1201 15:58:01.285406 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7c95959774-tk5fr_57082a3e-c5e1-4926-a5b1-306d0becae0c/barbican-api/0.log" Dec 01 15:58:01 crc kubenswrapper[4637]: I1201 15:58:01.428999 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7c95959774-tk5fr_57082a3e-c5e1-4926-a5b1-306d0becae0c/barbican-api-log/0.log" Dec 01 15:58:01 crc kubenswrapper[4637]: I1201 15:58:01.591988 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-6fcc69568b-hmqt6_4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04/barbican-keystone-listener-log/0.log" Dec 01 15:58:01 crc kubenswrapper[4637]: I1201 15:58:01.608783 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6fcc69568b-hmqt6_4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04/barbican-keystone-listener/0.log" Dec 01 15:58:01 crc kubenswrapper[4637]: I1201 15:58:01.777705 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7d5b77c96f-nz2mk_395c12b6-6b37-4ed6-93fb-65937fa99e65/barbican-worker/0.log" Dec 01 15:58:01 crc kubenswrapper[4637]: I1201 15:58:01.854698 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7d5b77c96f-nz2mk_395c12b6-6b37-4ed6-93fb-65937fa99e65/barbican-worker-log/0.log" Dec 01 15:58:02 crc kubenswrapper[4637]: I1201 15:58:02.111523 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r_290aad22-6654-4895-ae47-8651471b42e6/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 15:58:02 crc kubenswrapper[4637]: I1201 15:58:02.125055 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cd466a3c-d503-4718-a059-1cba9c618b07/ceilometer-central-agent/0.log" Dec 01 15:58:02 crc kubenswrapper[4637]: I1201 15:58:02.171752 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cd466a3c-d503-4718-a059-1cba9c618b07/ceilometer-notification-agent/0.log" Dec 01 15:58:02 crc kubenswrapper[4637]: I1201 15:58:02.590718 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cd466a3c-d503-4718-a059-1cba9c618b07/sg-core/0.log" Dec 01 15:58:02 crc kubenswrapper[4637]: I1201 15:58:02.614453 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cd466a3c-d503-4718-a059-1cba9c618b07/proxy-httpd/0.log" Dec 
01 15:58:02 crc kubenswrapper[4637]: I1201 15:58:02.714109 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8a976758-2d0a-43be-ad2f-69b00b2fec4a/cinder-api/0.log" Dec 01 15:58:03 crc kubenswrapper[4637]: I1201 15:58:03.141014 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a03f4cde-ebc1-46dd-9218-b3f073602fba/probe/0.log" Dec 01 15:58:03 crc kubenswrapper[4637]: I1201 15:58:03.185949 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a03f4cde-ebc1-46dd-9218-b3f073602fba/cinder-scheduler/0.log" Dec 01 15:58:03 crc kubenswrapper[4637]: I1201 15:58:03.213062 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8a976758-2d0a-43be-ad2f-69b00b2fec4a/cinder-api-log/0.log" Dec 01 15:58:03 crc kubenswrapper[4637]: I1201 15:58:03.472195 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-khhg5_8152b193-b04b-4380-9596-60c61cc82ef7/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 15:58:03 crc kubenswrapper[4637]: I1201 15:58:03.517374 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-prh4q_d76c43c1-7a6f-41c6-b052-5363182c236c/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 15:58:04 crc kubenswrapper[4637]: I1201 15:58:04.276860 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55fb7f8d4c-mcw8w_0e879137-dbe4-4b26-a4bc-21cd963dc5e9/init/0.log" Dec 01 15:58:04 crc kubenswrapper[4637]: I1201 15:58:04.544613 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55fb7f8d4c-mcw8w_0e879137-dbe4-4b26-a4bc-21cd963dc5e9/init/0.log" Dec 01 15:58:04 crc kubenswrapper[4637]: I1201 15:58:04.652530 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-ph98p_0103da10-320d-4303-8498-e0f06d9e97f4/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 15:58:04 crc kubenswrapper[4637]: I1201 15:58:04.736617 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55fb7f8d4c-mcw8w_0e879137-dbe4-4b26-a4bc-21cd963dc5e9/dnsmasq-dns/0.log" Dec 01 15:58:04 crc kubenswrapper[4637]: I1201 15:58:04.914055 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_11dea63e-843a-4a51-9525-4cda961c167a/glance-log/0.log" Dec 01 15:58:04 crc kubenswrapper[4637]: I1201 15:58:04.970152 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_11dea63e-843a-4a51-9525-4cda961c167a/glance-httpd/0.log" Dec 01 15:58:05 crc kubenswrapper[4637]: I1201 15:58:05.206261 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_00582d1a-8f52-49ad-9adc-306f07c46255/glance-log/0.log" Dec 01 15:58:05 crc kubenswrapper[4637]: I1201 15:58:05.272620 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_00582d1a-8f52-49ad-9adc-306f07c46255/glance-httpd/0.log" Dec 01 15:58:05 crc kubenswrapper[4637]: I1201 15:58:05.505981 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-fcb665488-kvv69_269bc165-8fbc-4c63-84ef-96b74d44fc16/horizon/0.log" Dec 01 15:58:05 crc kubenswrapper[4637]: I1201 15:58:05.720679 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-s548v_1e434aff-123b-42e2-8c40-c82c0bd5aabe/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 15:58:05 crc kubenswrapper[4637]: I1201 15:58:05.884078 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-fcb665488-kvv69_269bc165-8fbc-4c63-84ef-96b74d44fc16/horizon-log/0.log" Dec 01 15:58:05 crc kubenswrapper[4637]: I1201 15:58:05.942096 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-jmj6p_4144f6ad-f95f-4e2e-a9d3-003cdc5ef439/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 15:58:06 crc kubenswrapper[4637]: I1201 15:58:06.210183 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_0062383a-47a6-4c14-bfeb-0ea63ac93305/kube-state-metrics/0.log" Dec 01 15:58:06 crc kubenswrapper[4637]: I1201 15:58:06.457478 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6f896d59db-mf67s_0e2b0a1d-1624-43e9-8f38-9918fa4b0b85/keystone-api/0.log" Dec 01 15:58:06 crc kubenswrapper[4637]: I1201 15:58:06.533561 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp_6a16b3a0-82a0-4cc6-820a-6c084408566f/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 15:58:07 crc kubenswrapper[4637]: I1201 15:58:07.127109 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m_01f54aa2-e74c-40e5-a386-da5ea69b918c/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 15:58:07 crc kubenswrapper[4637]: I1201 15:58:07.467822 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-ff56c879c-9gwf6_391b6ea5-6446-4755-9075-904efff48769/neutron-httpd/0.log" Dec 01 15:58:07 crc kubenswrapper[4637]: I1201 15:58:07.804140 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-ff56c879c-9gwf6_391b6ea5-6446-4755-9075-904efff48769/neutron-api/0.log" Dec 01 15:58:08 crc kubenswrapper[4637]: I1201 15:58:08.478421 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_1ca73521-4fd3-4ff2-b490-7e52488a96e4/nova-cell0-conductor-conductor/0.log" Dec 01 15:58:08 crc kubenswrapper[4637]: I1201 15:58:08.648024 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_5903ea37-db92-4a76-afb4-14cfa23415d0/memcached/0.log" Dec 01 15:58:08 crc kubenswrapper[4637]: I1201 15:58:08.659613 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_47cc23aa-328f-4714-8407-cb7e62fa05db/nova-cell1-conductor-conductor/0.log" Dec 01 15:58:09 crc kubenswrapper[4637]: I1201 15:58:09.026427 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_cd4d95fc-d5be-40c9-bfae-3e1afaa2722d/nova-cell1-novncproxy-novncproxy/0.log" Dec 01 15:58:09 crc kubenswrapper[4637]: I1201 15:58:09.075572 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_1f8bf954-7268-4bcf-b75b-d4d4bfa26e11/nova-api-log/0.log" Dec 01 15:58:09 crc kubenswrapper[4637]: I1201 15:58:09.142694 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-md4m5_30d902b2-5e9c-4431-a436-03edbc23458d/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 15:58:09 crc kubenswrapper[4637]: I1201 15:58:09.247869 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_1f8bf954-7268-4bcf-b75b-d4d4bfa26e11/nova-api-api/0.log" Dec 01 15:58:09 crc kubenswrapper[4637]: I1201 15:58:09.471791 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0d7cca51-3d70-47ac-b1f9-ed181a1d8826/nova-metadata-log/0.log" Dec 01 15:58:09 crc kubenswrapper[4637]: I1201 15:58:09.804619 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3cfe4e59-0e72-4440-b962-2f86664cb2d7/mysql-bootstrap/0.log" Dec 01 15:58:10 crc kubenswrapper[4637]: I1201 15:58:10.024422 4637 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_264d91ff-c64c-4d65-bedc-4a11945042f0/nova-scheduler-scheduler/0.log" Dec 01 15:58:10 crc kubenswrapper[4637]: I1201 15:58:10.110012 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3cfe4e59-0e72-4440-b962-2f86664cb2d7/galera/0.log" Dec 01 15:58:10 crc kubenswrapper[4637]: I1201 15:58:10.124814 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3cfe4e59-0e72-4440-b962-2f86664cb2d7/mysql-bootstrap/0.log" Dec 01 15:58:10 crc kubenswrapper[4637]: I1201 15:58:10.291738 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dae9e33c-c07e-4c13-8104-d1310d91de8c/mysql-bootstrap/0.log" Dec 01 15:58:10 crc kubenswrapper[4637]: I1201 15:58:10.515960 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0d7cca51-3d70-47ac-b1f9-ed181a1d8826/nova-metadata-metadata/0.log" Dec 01 15:58:10 crc kubenswrapper[4637]: I1201 15:58:10.557601 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dae9e33c-c07e-4c13-8104-d1310d91de8c/mysql-bootstrap/0.log" Dec 01 15:58:10 crc kubenswrapper[4637]: I1201 15:58:10.589864 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dae9e33c-c07e-4c13-8104-d1310d91de8c/galera/0.log" Dec 01 15:58:10 crc kubenswrapper[4637]: I1201 15:58:10.619532 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_421d907a-c7b0-4109-8d01-e725459215b9/openstackclient/0.log" Dec 01 15:58:10 crc kubenswrapper[4637]: I1201 15:58:10.771253 4637 scope.go:117] "RemoveContainer" containerID="73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c" Dec 01 15:58:10 crc kubenswrapper[4637]: E1201 15:58:10.771620 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:58:10 crc kubenswrapper[4637]: I1201 15:58:10.887009 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-dhqng_8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e/ovn-controller/0.log" Dec 01 15:58:10 crc kubenswrapper[4637]: I1201 15:58:10.938518 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-f4khp_bf139f21-cf2f-4ef1-9474-9c785a02053e/openstack-network-exporter/0.log" Dec 01 15:58:11 crc kubenswrapper[4637]: I1201 15:58:11.080958 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9pxbh_e4037f80-7861-4283-99d5-2b078ef3de4b/ovsdb-server-init/0.log" Dec 01 15:58:11 crc kubenswrapper[4637]: I1201 15:58:11.224578 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9pxbh_e4037f80-7861-4283-99d5-2b078ef3de4b/ovsdb-server-init/0.log" Dec 01 15:58:11 crc kubenswrapper[4637]: I1201 15:58:11.255733 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9pxbh_e4037f80-7861-4283-99d5-2b078ef3de4b/ovsdb-server/0.log" Dec 01 15:58:11 crc kubenswrapper[4637]: I1201 15:58:11.328421 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-n9fmc_07986dae-e60d-4809-88fe-cbd86b27ef81/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 15:58:11 crc kubenswrapper[4637]: I1201 15:58:11.331943 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9pxbh_e4037f80-7861-4283-99d5-2b078ef3de4b/ovs-vswitchd/0.log" Dec 01 15:58:11 crc kubenswrapper[4637]: I1201 
15:58:11.480610 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5348dcbd-104a-4fff-9414-bb859f58fd52/openstack-network-exporter/0.log" Dec 01 15:58:11 crc kubenswrapper[4637]: I1201 15:58:11.586102 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5348dcbd-104a-4fff-9414-bb859f58fd52/ovn-northd/0.log" Dec 01 15:58:11 crc kubenswrapper[4637]: I1201 15:58:11.616767 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bfeecd83-4225-4d76-8002-6593dc66ab4f/openstack-network-exporter/0.log" Dec 01 15:58:11 crc kubenswrapper[4637]: I1201 15:58:11.702568 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bfeecd83-4225-4d76-8002-6593dc66ab4f/ovsdbserver-nb/0.log" Dec 01 15:58:11 crc kubenswrapper[4637]: I1201 15:58:11.822468 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_db3ed190-cdcc-4547-b48f-d09f6e881dfb/openstack-network-exporter/0.log" Dec 01 15:58:11 crc kubenswrapper[4637]: I1201 15:58:11.869117 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_db3ed190-cdcc-4547-b48f-d09f6e881dfb/ovsdbserver-sb/0.log" Dec 01 15:58:12 crc kubenswrapper[4637]: I1201 15:58:12.069413 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85bcc8d488-896bl_74bff823-e398-4a06-a477-d98060ddad39/placement-api/0.log" Dec 01 15:58:12 crc kubenswrapper[4637]: I1201 15:58:12.531255 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_64730b89-aa49-4741-b050-c283d98626c9/setup-container/0.log" Dec 01 15:58:12 crc kubenswrapper[4637]: I1201 15:58:12.667252 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85bcc8d488-896bl_74bff823-e398-4a06-a477-d98060ddad39/placement-log/0.log" Dec 01 15:58:12 crc kubenswrapper[4637]: I1201 15:58:12.731596 4637 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_64730b89-aa49-4741-b050-c283d98626c9/setup-container/0.log" Dec 01 15:58:12 crc kubenswrapper[4637]: I1201 15:58:12.824140 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_64730b89-aa49-4741-b050-c283d98626c9/rabbitmq/0.log" Dec 01 15:58:12 crc kubenswrapper[4637]: I1201 15:58:12.866247 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_966262a4-bd2b-40fd-b052-ce2bd68485b5/setup-container/0.log" Dec 01 15:58:13 crc kubenswrapper[4637]: I1201 15:58:13.030770 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_966262a4-bd2b-40fd-b052-ce2bd68485b5/setup-container/0.log" Dec 01 15:58:13 crc kubenswrapper[4637]: I1201 15:58:13.122334 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq_d3b484f7-438b-4ea9-9529-9ba5a49fca84/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 15:58:13 crc kubenswrapper[4637]: I1201 15:58:13.132559 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_966262a4-bd2b-40fd-b052-ce2bd68485b5/rabbitmq/0.log" Dec 01 15:58:13 crc kubenswrapper[4637]: I1201 15:58:13.267363 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-mlpn5_24361437-f549-45e9-af51-6a842e4bc82e/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 15:58:13 crc kubenswrapper[4637]: I1201 15:58:13.355463 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9_f42fd0ac-99c4-49ed-87d0-fe00a580a2ea/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 15:58:13 crc kubenswrapper[4637]: I1201 15:58:13.398904 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-5ctt6_b2f971f6-6729-4d92-9849-2c03e6d0747b/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 15:58:13 crc kubenswrapper[4637]: I1201 15:58:13.564796 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-cmb78_ff7c15eb-bee2-412f-8689-ba46478d7b33/ssh-known-hosts-edpm-deployment/0.log" Dec 01 15:58:13 crc kubenswrapper[4637]: I1201 15:58:13.785919 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-84f489b6b7-wswv6_f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da/proxy-httpd/0.log" Dec 01 15:58:14 crc kubenswrapper[4637]: I1201 15:58:14.208866 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-vnnqf_911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51/swift-ring-rebalance/0.log" Dec 01 15:58:14 crc kubenswrapper[4637]: I1201 15:58:14.224670 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-84f489b6b7-wswv6_f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da/proxy-server/0.log" Dec 01 15:58:14 crc kubenswrapper[4637]: I1201 15:58:14.283552 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/account-auditor/0.log" Dec 01 15:58:14 crc kubenswrapper[4637]: I1201 15:58:14.381714 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/account-reaper/0.log" Dec 01 15:58:14 crc kubenswrapper[4637]: I1201 15:58:14.444650 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/account-replicator/0.log" Dec 01 15:58:14 crc kubenswrapper[4637]: I1201 15:58:14.473264 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/container-auditor/0.log" Dec 01 15:58:14 crc kubenswrapper[4637]: I1201 
15:58:14.484718 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/account-server/0.log" Dec 01 15:58:14 crc kubenswrapper[4637]: I1201 15:58:14.589704 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/container-replicator/0.log" Dec 01 15:58:14 crc kubenswrapper[4637]: I1201 15:58:14.619466 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/container-server/0.log" Dec 01 15:58:14 crc kubenswrapper[4637]: I1201 15:58:14.696390 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/container-updater/0.log" Dec 01 15:58:14 crc kubenswrapper[4637]: I1201 15:58:14.726144 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/object-auditor/0.log" Dec 01 15:58:14 crc kubenswrapper[4637]: I1201 15:58:14.836018 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/object-expirer/0.log" Dec 01 15:58:14 crc kubenswrapper[4637]: I1201 15:58:14.894956 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/object-replicator/0.log" Dec 01 15:58:14 crc kubenswrapper[4637]: I1201 15:58:14.923708 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/object-server/0.log" Dec 01 15:58:14 crc kubenswrapper[4637]: I1201 15:58:14.998193 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/object-updater/0.log" Dec 01 15:58:15 crc kubenswrapper[4637]: I1201 15:58:15.017086 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/rsync/0.log" Dec 01 15:58:15 crc kubenswrapper[4637]: I1201 15:58:15.077867 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/swift-recon-cron/0.log" Dec 01 15:58:15 crc kubenswrapper[4637]: I1201 15:58:15.197028 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg_48718ab6-39c1-430f-ac3c-711d073d32f9/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 15:58:15 crc kubenswrapper[4637]: I1201 15:58:15.325233 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_151da5f8-6a6e-4d06-b6a5-de2982ed8da5/tempest-tests-tempest-tests-runner/0.log" Dec 01 15:58:15 crc kubenswrapper[4637]: I1201 15:58:15.784140 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_d820189d-4832-48ee-93e1-d501b8ef91b8/test-operator-logs-container/0.log" Dec 01 15:58:15 crc kubenswrapper[4637]: I1201 15:58:15.847862 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-vppnt_7a9a0491-0639-431b-bf41-812e29d6f3b4/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 15:58:24 crc kubenswrapper[4637]: I1201 15:58:24.771723 4637 scope.go:117] "RemoveContainer" containerID="73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c" Dec 01 15:58:24 crc kubenswrapper[4637]: E1201 15:58:24.772615 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:58:37 crc kubenswrapper[4637]: I1201 15:58:37.771431 4637 scope.go:117] "RemoveContainer" containerID="73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c" Dec 01 15:58:37 crc kubenswrapper[4637]: E1201 15:58:37.772242 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:58:42 crc kubenswrapper[4637]: I1201 15:58:42.888991 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp_ae244c42-651d-4f31-9639-ba005da6ccc9/util/0.log" Dec 01 15:58:43 crc kubenswrapper[4637]: I1201 15:58:43.856641 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp_ae244c42-651d-4f31-9639-ba005da6ccc9/pull/0.log" Dec 01 15:58:43 crc kubenswrapper[4637]: I1201 15:58:43.872688 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp_ae244c42-651d-4f31-9639-ba005da6ccc9/pull/0.log" Dec 01 15:58:43 crc kubenswrapper[4637]: I1201 15:58:43.940649 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp_ae244c42-651d-4f31-9639-ba005da6ccc9/util/0.log" Dec 01 15:58:44 crc kubenswrapper[4637]: I1201 15:58:44.133344 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp_ae244c42-651d-4f31-9639-ba005da6ccc9/util/0.log" Dec 01 15:58:44 crc kubenswrapper[4637]: I1201 15:58:44.160357 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp_ae244c42-651d-4f31-9639-ba005da6ccc9/extract/0.log" Dec 01 15:58:44 crc kubenswrapper[4637]: I1201 15:58:44.193634 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp_ae244c42-651d-4f31-9639-ba005da6ccc9/pull/0.log" Dec 01 15:58:44 crc kubenswrapper[4637]: I1201 15:58:44.392414 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-5bfbbb859d-xm2c9_e3fe5f37-3b9c-4d1d-9890-920cfaad9b36/kube-rbac-proxy/0.log" Dec 01 15:58:44 crc kubenswrapper[4637]: I1201 15:58:44.440725 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-5bfbbb859d-xm2c9_e3fe5f37-3b9c-4d1d-9890-920cfaad9b36/manager/0.log" Dec 01 15:58:44 crc kubenswrapper[4637]: I1201 15:58:44.570222 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-748967c98-7t5h2_612d1951-263e-4d58-a3ab-94f8b2ddcb68/kube-rbac-proxy/0.log" Dec 01 15:58:44 crc kubenswrapper[4637]: I1201 15:58:44.663552 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-748967c98-7t5h2_612d1951-263e-4d58-a3ab-94f8b2ddcb68/manager/0.log" Dec 01 15:58:44 crc kubenswrapper[4637]: I1201 15:58:44.796313 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6788cc6d75-ngs55_e552181e-b9e1-43f4-825f-649923e52631/kube-rbac-proxy/0.log" Dec 01 15:58:44 crc 
kubenswrapper[4637]: I1201 15:58:44.922108 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6788cc6d75-ngs55_e552181e-b9e1-43f4-825f-649923e52631/manager/0.log" Dec 01 15:58:44 crc kubenswrapper[4637]: I1201 15:58:44.967679 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6bd966bbd4-4c7cm_9a6330bc-2072-40b9-a81b-00d532b6b804/kube-rbac-proxy/0.log" Dec 01 15:58:45 crc kubenswrapper[4637]: I1201 15:58:45.111638 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6bd966bbd4-4c7cm_9a6330bc-2072-40b9-a81b-00d532b6b804/manager/0.log" Dec 01 15:58:45 crc kubenswrapper[4637]: I1201 15:58:45.588772 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-698d6fd7d6-4qp7b_df23c8f8-8046-4e98-a46b-cc7c829981b9/kube-rbac-proxy/0.log" Dec 01 15:58:45 crc kubenswrapper[4637]: I1201 15:58:45.599665 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-698d6fd7d6-4qp7b_df23c8f8-8046-4e98-a46b-cc7c829981b9/manager/0.log" Dec 01 15:58:45 crc kubenswrapper[4637]: I1201 15:58:45.832077 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7d5d9fd47f-njp6w_60365f73-6418-4fdc-901b-07a2321fdcf3/kube-rbac-proxy/0.log" Dec 01 15:58:45 crc kubenswrapper[4637]: I1201 15:58:45.857179 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7d5d9fd47f-njp6w_60365f73-6418-4fdc-901b-07a2321fdcf3/manager/0.log" Dec 01 15:58:45 crc kubenswrapper[4637]: I1201 15:58:45.926620 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_infra-operator-controller-manager-577c5f6d94-kx7cf_9235c8f6-6738-496d-a945-42ba5d15afd2/kube-rbac-proxy/0.log" Dec 01 15:58:46 crc kubenswrapper[4637]: I1201 15:58:46.209821 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-54485f899-c8xnq_ee97ba6c-4f2f-4a8a-b631-ae8a77b4c35b/kube-rbac-proxy/0.log" Dec 01 15:58:46 crc kubenswrapper[4637]: I1201 15:58:46.230204 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-577c5f6d94-kx7cf_9235c8f6-6738-496d-a945-42ba5d15afd2/manager/0.log" Dec 01 15:58:46 crc kubenswrapper[4637]: I1201 15:58:46.275641 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-54485f899-c8xnq_ee97ba6c-4f2f-4a8a-b631-ae8a77b4c35b/manager/0.log" Dec 01 15:58:46 crc kubenswrapper[4637]: I1201 15:58:46.513870 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7d6f5d799-p5htp_68dbc1ea-c95b-48b1-a4a3-542c87f531ac/manager/0.log" Dec 01 15:58:46 crc kubenswrapper[4637]: I1201 15:58:46.515676 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7d6f5d799-p5htp_68dbc1ea-c95b-48b1-a4a3-542c87f531ac/kube-rbac-proxy/0.log" Dec 01 15:58:46 crc kubenswrapper[4637]: I1201 15:58:46.682700 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-646fd589f9-4h7xc_0c561b38-c3aa-492a-bcec-9c471c3fbf0b/kube-rbac-proxy/0.log" Dec 01 15:58:46 crc kubenswrapper[4637]: I1201 15:58:46.731371 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-646fd589f9-4h7xc_0c561b38-c3aa-492a-bcec-9c471c3fbf0b/manager/0.log" Dec 01 15:58:46 crc kubenswrapper[4637]: I1201 15:58:46.883285 4637 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-64d7c556cd-ng9gk_641c4df0-62e4-4b62-8f75-60e49bb56f7a/kube-rbac-proxy/0.log" Dec 01 15:58:46 crc kubenswrapper[4637]: I1201 15:58:46.988174 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-64d7c556cd-ng9gk_641c4df0-62e4-4b62-8f75-60e49bb56f7a/manager/0.log" Dec 01 15:58:47 crc kubenswrapper[4637]: I1201 15:58:47.111510 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6b6c55ffd5-vhj5n_1f5d18af-662c-438a-ab53-62d6c6049921/kube-rbac-proxy/0.log" Dec 01 15:58:47 crc kubenswrapper[4637]: I1201 15:58:47.177572 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6b6c55ffd5-vhj5n_1f5d18af-662c-438a-ab53-62d6c6049921/manager/0.log" Dec 01 15:58:47 crc kubenswrapper[4637]: I1201 15:58:47.226115 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79d658b66d-j8flc_bf0d761a-bcaa-4b9d-8e16-5c478c9a90d5/kube-rbac-proxy/0.log" Dec 01 15:58:47 crc kubenswrapper[4637]: I1201 15:58:47.406944 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79d658b66d-j8flc_bf0d761a-bcaa-4b9d-8e16-5c478c9a90d5/manager/0.log" Dec 01 15:58:47 crc kubenswrapper[4637]: I1201 15:58:47.439307 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7979c68bc7-69cgp_1836e03a-1ea3-4a52-98e5-9e6f7e04d1b0/kube-rbac-proxy/0.log" Dec 01 15:58:47 crc kubenswrapper[4637]: I1201 15:58:47.563582 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7979c68bc7-69cgp_1836e03a-1ea3-4a52-98e5-9e6f7e04d1b0/manager/0.log" Dec 01 15:58:47 crc 
kubenswrapper[4637]: I1201 15:58:47.600610 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-77868f484-wbjjx_e99ec116-bc40-4275-b124-476b780bf9ca/kube-rbac-proxy/0.log" Dec 01 15:58:47 crc kubenswrapper[4637]: I1201 15:58:47.699297 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-77868f484-wbjjx_e99ec116-bc40-4275-b124-476b780bf9ca/manager/0.log" Dec 01 15:58:47 crc kubenswrapper[4637]: I1201 15:58:47.776431 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6477f85467-czzlb_32635512-8e34-46b3-8285-7cdc293b15e4/kube-rbac-proxy/0.log" Dec 01 15:58:47 crc kubenswrapper[4637]: I1201 15:58:47.942159 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5bf66cbd54-q5fc8_2d479f26-6243-4089-9fcd-1821d05cf3f4/kube-rbac-proxy/0.log" Dec 01 15:58:48 crc kubenswrapper[4637]: I1201 15:58:48.187994 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5bf66cbd54-q5fc8_2d479f26-6243-4089-9fcd-1821d05cf3f4/operator/0.log" Dec 01 15:58:48 crc kubenswrapper[4637]: I1201 15:58:48.229619 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-6dtxv_fa9cd1f8-d36a-4263-b944-8594a42fe15f/registry-server/0.log" Dec 01 15:58:48 crc kubenswrapper[4637]: I1201 15:58:48.351138 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5b67cfc8fb-cnl9j_a4dea3ee-ee87-4f8b-8b76-55db9bc85fc9/kube-rbac-proxy/0.log" Dec 01 15:58:48 crc kubenswrapper[4637]: I1201 15:58:48.494855 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5b67cfc8fb-cnl9j_a4dea3ee-ee87-4f8b-8b76-55db9bc85fc9/manager/0.log" Dec 01 15:58:48 crc kubenswrapper[4637]: I1201 15:58:48.581712 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-867d87977b-tpjpc_d8f49f2b-6edc-40e6-b5cf-da3e8f26009f/kube-rbac-proxy/0.log" Dec 01 15:58:48 crc kubenswrapper[4637]: I1201 15:58:48.671352 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-867d87977b-tpjpc_d8f49f2b-6edc-40e6-b5cf-da3e8f26009f/manager/0.log" Dec 01 15:58:48 crc kubenswrapper[4637]: I1201 15:58:48.771071 4637 scope.go:117] "RemoveContainer" containerID="73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c" Dec 01 15:58:48 crc kubenswrapper[4637]: E1201 15:58:48.771592 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:58:48 crc kubenswrapper[4637]: I1201 15:58:48.901165 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-hmlgw_6f0c83fd-5afa-48c8-aa05-ce507abc52c6/operator/0.log" Dec 01 15:58:48 crc kubenswrapper[4637]: I1201 15:58:48.930011 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6477f85467-czzlb_32635512-8e34-46b3-8285-7cdc293b15e4/manager/0.log" Dec 01 15:58:48 crc kubenswrapper[4637]: I1201 15:58:48.948717 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_swift-operator-controller-manager-cc9f5bc5c-lfzmh_4f425213-2aa4-419c-b672-22a94b28958a/kube-rbac-proxy/0.log" Dec 01 15:58:49 crc kubenswrapper[4637]: I1201 15:58:49.040435 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-cc9f5bc5c-lfzmh_4f425213-2aa4-419c-b672-22a94b28958a/manager/0.log" Dec 01 15:58:49 crc kubenswrapper[4637]: I1201 15:58:49.191279 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58487d9bf4-khr5x_75a2f55f-977e-4608-86e3-ad7cbb948420/manager/0.log" Dec 01 15:58:49 crc kubenswrapper[4637]: I1201 15:58:49.226913 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58487d9bf4-khr5x_75a2f55f-977e-4608-86e3-ad7cbb948420/kube-rbac-proxy/0.log" Dec 01 15:58:49 crc kubenswrapper[4637]: I1201 15:58:49.293555 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-77db6bf9c-j4ktr_3d463954-f36f-4cc1-9303-df4f1e7b4c0c/kube-rbac-proxy/0.log" Dec 01 15:58:49 crc kubenswrapper[4637]: I1201 15:58:49.405985 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6b56b8849f-tm79k_df856307-2e53-4198-b26b-f7cc780f6917/kube-rbac-proxy/0.log" Dec 01 15:58:49 crc kubenswrapper[4637]: I1201 15:58:49.408082 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-77db6bf9c-j4ktr_3d463954-f36f-4cc1-9303-df4f1e7b4c0c/manager/0.log" Dec 01 15:58:49 crc kubenswrapper[4637]: I1201 15:58:49.466781 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6b56b8849f-tm79k_df856307-2e53-4198-b26b-f7cc780f6917/manager/0.log" Dec 01 15:59:02 crc kubenswrapper[4637]: I1201 15:59:02.772042 4637 
scope.go:117] "RemoveContainer" containerID="73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c" Dec 01 15:59:02 crc kubenswrapper[4637]: E1201 15:59:02.773169 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:59:09 crc kubenswrapper[4637]: I1201 15:59:09.423312 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-w6sz5_369bf28c-11f9-494a-8a91-a11e861d84e0/control-plane-machine-set-operator/0.log" Dec 01 15:59:09 crc kubenswrapper[4637]: I1201 15:59:09.539994 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jtkrh_6f680eac-8309-428b-9b5e-f5324aaf426a/kube-rbac-proxy/0.log" Dec 01 15:59:09 crc kubenswrapper[4637]: I1201 15:59:09.559220 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jtkrh_6f680eac-8309-428b-9b5e-f5324aaf426a/machine-api-operator/0.log" Dec 01 15:59:17 crc kubenswrapper[4637]: I1201 15:59:17.771614 4637 scope.go:117] "RemoveContainer" containerID="73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c" Dec 01 15:59:17 crc kubenswrapper[4637]: E1201 15:59:17.772491 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:59:23 crc kubenswrapper[4637]: I1201 15:59:23.465849 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-8dkhl_6ada9875-197f-49ea-ae31-130a5e7a6229/cert-manager-controller/0.log" Dec 01 15:59:23 crc kubenswrapper[4637]: I1201 15:59:23.587436 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-jfx77_915faac2-3a6c-44de-8f47-a7d3c0aa2306/cert-manager-cainjector/0.log" Dec 01 15:59:23 crc kubenswrapper[4637]: I1201 15:59:23.712031 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-2gbzf_96ee03cd-c317-432b-8918-7e13da710acb/cert-manager-webhook/0.log" Dec 01 15:59:32 crc kubenswrapper[4637]: I1201 15:59:32.771595 4637 scope.go:117] "RemoveContainer" containerID="73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c" Dec 01 15:59:32 crc kubenswrapper[4637]: E1201 15:59:32.774073 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:59:38 crc kubenswrapper[4637]: I1201 15:59:38.031879 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-vggm6_8c3b9a86-e588-47f5-a465-45691a6808e1/nmstate-console-plugin/0.log" Dec 01 15:59:38 crc kubenswrapper[4637]: I1201 15:59:38.509365 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-tnkgd_508d135b-1c5a-49db-a896-e7489b8c9968/kube-rbac-proxy/0.log" Dec 01 15:59:38 crc kubenswrapper[4637]: I1201 15:59:38.513134 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-vjj7n_7375b015-a69b-4993-abf8-6c18215144da/nmstate-handler/0.log" Dec 01 15:59:38 crc kubenswrapper[4637]: I1201 15:59:38.560042 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-tnkgd_508d135b-1c5a-49db-a896-e7489b8c9968/nmstate-metrics/0.log" Dec 01 15:59:38 crc kubenswrapper[4637]: I1201 15:59:38.751037 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-ccqrg_802cfe24-78e7-428d-89b5-04b5a610b9fb/nmstate-operator/0.log" Dec 01 15:59:38 crc kubenswrapper[4637]: I1201 15:59:38.781295 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-mz4j4_3ed53917-b528-4d89-9503-578c448fd6c7/nmstate-webhook/0.log" Dec 01 15:59:45 crc kubenswrapper[4637]: I1201 15:59:45.772350 4637 scope.go:117] "RemoveContainer" containerID="73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c" Dec 01 15:59:45 crc kubenswrapper[4637]: E1201 15:59:45.773234 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 15:59:54 crc kubenswrapper[4637]: I1201 15:59:54.517895 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-nwkx8_fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d/kube-rbac-proxy/0.log" Dec 01 15:59:54 crc 
kubenswrapper[4637]: I1201 15:59:54.648837 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-nwkx8_fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d/controller/0.log" Dec 01 15:59:54 crc kubenswrapper[4637]: I1201 15:59:54.785444 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/cp-frr-files/0.log" Dec 01 15:59:54 crc kubenswrapper[4637]: I1201 15:59:54.978201 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/cp-reloader/0.log" Dec 01 15:59:55 crc kubenswrapper[4637]: I1201 15:59:55.013900 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/cp-frr-files/0.log" Dec 01 15:59:55 crc kubenswrapper[4637]: I1201 15:59:55.029191 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/cp-reloader/0.log" Dec 01 15:59:55 crc kubenswrapper[4637]: I1201 15:59:55.071053 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/cp-metrics/0.log" Dec 01 15:59:55 crc kubenswrapper[4637]: I1201 15:59:55.265164 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/cp-metrics/0.log" Dec 01 15:59:55 crc kubenswrapper[4637]: I1201 15:59:55.319610 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/cp-reloader/0.log" Dec 01 15:59:55 crc kubenswrapper[4637]: I1201 15:59:55.327361 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/cp-frr-files/0.log" Dec 01 15:59:55 crc kubenswrapper[4637]: I1201 15:59:55.349059 4637 log.go:25] "Finished parsing 
log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/cp-metrics/0.log" Dec 01 15:59:55 crc kubenswrapper[4637]: I1201 15:59:55.506006 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/cp-frr-files/0.log" Dec 01 15:59:55 crc kubenswrapper[4637]: I1201 15:59:55.569115 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/cp-metrics/0.log" Dec 01 15:59:55 crc kubenswrapper[4637]: I1201 15:59:55.605006 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/controller/0.log" Dec 01 15:59:55 crc kubenswrapper[4637]: I1201 15:59:55.614572 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/cp-reloader/0.log" Dec 01 15:59:55 crc kubenswrapper[4637]: I1201 15:59:55.804034 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/frr-metrics/0.log" Dec 01 15:59:55 crc kubenswrapper[4637]: I1201 15:59:55.896526 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/kube-rbac-proxy/0.log" Dec 01 15:59:55 crc kubenswrapper[4637]: I1201 15:59:55.899783 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/kube-rbac-proxy-frr/0.log" Dec 01 15:59:56 crc kubenswrapper[4637]: I1201 15:59:56.070061 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/reloader/0.log" Dec 01 15:59:56 crc kubenswrapper[4637]: I1201 15:59:56.208158 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-j76nt_b9100ec0-5fe8-4ad1-bce7-40ca6cf5e923/frr-k8s-webhook-server/0.log" Dec 01 15:59:56 crc kubenswrapper[4637]: I1201 15:59:56.450905 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-8669bf5bd5-vcn5v_baaff01d-29a0-44c3-8b9f-c8e8e3afd1f4/manager/0.log" Dec 01 15:59:57 crc kubenswrapper[4637]: I1201 15:59:57.096812 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/frr/0.log" Dec 01 15:59:57 crc kubenswrapper[4637]: I1201 15:59:57.341860 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-59b9f9d896-crh4k_60286b73-70b9-46ce-8fca-28552760b79e/webhook-server/0.log" Dec 01 15:59:57 crc kubenswrapper[4637]: I1201 15:59:57.387945 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-r4rsx_43335bc5-11b9-4763-bf18-efeaef24d35a/kube-rbac-proxy/0.log" Dec 01 15:59:57 crc kubenswrapper[4637]: I1201 15:59:57.804062 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-r4rsx_43335bc5-11b9-4763-bf18-efeaef24d35a/speaker/0.log" Dec 01 15:59:59 crc kubenswrapper[4637]: I1201 15:59:59.780442 4637 scope.go:117] "RemoveContainer" containerID="73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c" Dec 01 15:59:59 crc kubenswrapper[4637]: E1201 15:59:59.781132 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:00:00 crc kubenswrapper[4637]: I1201 
16:00:00.166983 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410080-sfpg2"] Dec 01 16:00:00 crc kubenswrapper[4637]: E1201 16:00:00.167489 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf93ad78-3a41-4ac3-8484-3e9daa70b538" containerName="container-00" Dec 01 16:00:00 crc kubenswrapper[4637]: I1201 16:00:00.167510 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf93ad78-3a41-4ac3-8484-3e9daa70b538" containerName="container-00" Dec 01 16:00:00 crc kubenswrapper[4637]: I1201 16:00:00.167733 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf93ad78-3a41-4ac3-8484-3e9daa70b538" containerName="container-00" Dec 01 16:00:00 crc kubenswrapper[4637]: I1201 16:00:00.168451 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29410080-sfpg2" Dec 01 16:00:00 crc kubenswrapper[4637]: I1201 16:00:00.170799 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 16:00:00 crc kubenswrapper[4637]: I1201 16:00:00.175985 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 16:00:00 crc kubenswrapper[4637]: I1201 16:00:00.184493 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410080-sfpg2"] Dec 01 16:00:00 crc kubenswrapper[4637]: I1201 16:00:00.305949 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a4af3cb-2171-42a3-876d-9ebfb03d2cc4-config-volume\") pod \"collect-profiles-29410080-sfpg2\" (UID: \"7a4af3cb-2171-42a3-876d-9ebfb03d2cc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410080-sfpg2" Dec 01 16:00:00 
crc kubenswrapper[4637]: I1201 16:00:00.306355 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqgf5\" (UniqueName: \"kubernetes.io/projected/7a4af3cb-2171-42a3-876d-9ebfb03d2cc4-kube-api-access-kqgf5\") pod \"collect-profiles-29410080-sfpg2\" (UID: \"7a4af3cb-2171-42a3-876d-9ebfb03d2cc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410080-sfpg2" Dec 01 16:00:00 crc kubenswrapper[4637]: I1201 16:00:00.306601 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a4af3cb-2171-42a3-876d-9ebfb03d2cc4-secret-volume\") pod \"collect-profiles-29410080-sfpg2\" (UID: \"7a4af3cb-2171-42a3-876d-9ebfb03d2cc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410080-sfpg2" Dec 01 16:00:00 crc kubenswrapper[4637]: I1201 16:00:00.408263 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a4af3cb-2171-42a3-876d-9ebfb03d2cc4-secret-volume\") pod \"collect-profiles-29410080-sfpg2\" (UID: \"7a4af3cb-2171-42a3-876d-9ebfb03d2cc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410080-sfpg2" Dec 01 16:00:00 crc kubenswrapper[4637]: I1201 16:00:00.408588 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a4af3cb-2171-42a3-876d-9ebfb03d2cc4-config-volume\") pod \"collect-profiles-29410080-sfpg2\" (UID: \"7a4af3cb-2171-42a3-876d-9ebfb03d2cc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410080-sfpg2" Dec 01 16:00:00 crc kubenswrapper[4637]: I1201 16:00:00.408779 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqgf5\" (UniqueName: \"kubernetes.io/projected/7a4af3cb-2171-42a3-876d-9ebfb03d2cc4-kube-api-access-kqgf5\") pod 
\"collect-profiles-29410080-sfpg2\" (UID: \"7a4af3cb-2171-42a3-876d-9ebfb03d2cc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410080-sfpg2" Dec 01 16:00:00 crc kubenswrapper[4637]: I1201 16:00:00.409380 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a4af3cb-2171-42a3-876d-9ebfb03d2cc4-config-volume\") pod \"collect-profiles-29410080-sfpg2\" (UID: \"7a4af3cb-2171-42a3-876d-9ebfb03d2cc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410080-sfpg2" Dec 01 16:00:00 crc kubenswrapper[4637]: I1201 16:00:00.414636 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a4af3cb-2171-42a3-876d-9ebfb03d2cc4-secret-volume\") pod \"collect-profiles-29410080-sfpg2\" (UID: \"7a4af3cb-2171-42a3-876d-9ebfb03d2cc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410080-sfpg2" Dec 01 16:00:00 crc kubenswrapper[4637]: I1201 16:00:00.425485 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqgf5\" (UniqueName: \"kubernetes.io/projected/7a4af3cb-2171-42a3-876d-9ebfb03d2cc4-kube-api-access-kqgf5\") pod \"collect-profiles-29410080-sfpg2\" (UID: \"7a4af3cb-2171-42a3-876d-9ebfb03d2cc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410080-sfpg2" Dec 01 16:00:00 crc kubenswrapper[4637]: I1201 16:00:00.492624 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29410080-sfpg2" Dec 01 16:00:00 crc kubenswrapper[4637]: I1201 16:00:00.990454 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410080-sfpg2"] Dec 01 16:00:01 crc kubenswrapper[4637]: I1201 16:00:01.960171 4637 generic.go:334] "Generic (PLEG): container finished" podID="7a4af3cb-2171-42a3-876d-9ebfb03d2cc4" containerID="9060d0e540969ff7fcc502db75211e21de0490a7b1db555b5db67d67782d0981" exitCode=0 Dec 01 16:00:01 crc kubenswrapper[4637]: I1201 16:00:01.960328 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29410080-sfpg2" event={"ID":"7a4af3cb-2171-42a3-876d-9ebfb03d2cc4","Type":"ContainerDied","Data":"9060d0e540969ff7fcc502db75211e21de0490a7b1db555b5db67d67782d0981"} Dec 01 16:00:01 crc kubenswrapper[4637]: I1201 16:00:01.960574 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29410080-sfpg2" event={"ID":"7a4af3cb-2171-42a3-876d-9ebfb03d2cc4","Type":"ContainerStarted","Data":"e94191e1d5e09f13ebfcc6e9722fd39a494230f7e7e7e51bb434a6a918eb63c7"} Dec 01 16:00:03 crc kubenswrapper[4637]: I1201 16:00:03.334844 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29410080-sfpg2" Dec 01 16:00:03 crc kubenswrapper[4637]: I1201 16:00:03.471409 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqgf5\" (UniqueName: \"kubernetes.io/projected/7a4af3cb-2171-42a3-876d-9ebfb03d2cc4-kube-api-access-kqgf5\") pod \"7a4af3cb-2171-42a3-876d-9ebfb03d2cc4\" (UID: \"7a4af3cb-2171-42a3-876d-9ebfb03d2cc4\") " Dec 01 16:00:03 crc kubenswrapper[4637]: I1201 16:00:03.471502 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a4af3cb-2171-42a3-876d-9ebfb03d2cc4-config-volume\") pod \"7a4af3cb-2171-42a3-876d-9ebfb03d2cc4\" (UID: \"7a4af3cb-2171-42a3-876d-9ebfb03d2cc4\") " Dec 01 16:00:03 crc kubenswrapper[4637]: I1201 16:00:03.471700 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a4af3cb-2171-42a3-876d-9ebfb03d2cc4-secret-volume\") pod \"7a4af3cb-2171-42a3-876d-9ebfb03d2cc4\" (UID: \"7a4af3cb-2171-42a3-876d-9ebfb03d2cc4\") " Dec 01 16:00:03 crc kubenswrapper[4637]: I1201 16:00:03.472448 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a4af3cb-2171-42a3-876d-9ebfb03d2cc4-config-volume" (OuterVolumeSpecName: "config-volume") pod "7a4af3cb-2171-42a3-876d-9ebfb03d2cc4" (UID: "7a4af3cb-2171-42a3-876d-9ebfb03d2cc4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 16:00:03 crc kubenswrapper[4637]: I1201 16:00:03.476867 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a4af3cb-2171-42a3-876d-9ebfb03d2cc4-kube-api-access-kqgf5" (OuterVolumeSpecName: "kube-api-access-kqgf5") pod "7a4af3cb-2171-42a3-876d-9ebfb03d2cc4" (UID: "7a4af3cb-2171-42a3-876d-9ebfb03d2cc4"). 
InnerVolumeSpecName "kube-api-access-kqgf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 16:00:03 crc kubenswrapper[4637]: I1201 16:00:03.486816 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a4af3cb-2171-42a3-876d-9ebfb03d2cc4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7a4af3cb-2171-42a3-876d-9ebfb03d2cc4" (UID: "7a4af3cb-2171-42a3-876d-9ebfb03d2cc4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 16:00:03 crc kubenswrapper[4637]: I1201 16:00:03.574416 4637 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a4af3cb-2171-42a3-876d-9ebfb03d2cc4-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 16:00:03 crc kubenswrapper[4637]: I1201 16:00:03.574784 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqgf5\" (UniqueName: \"kubernetes.io/projected/7a4af3cb-2171-42a3-876d-9ebfb03d2cc4-kube-api-access-kqgf5\") on node \"crc\" DevicePath \"\"" Dec 01 16:00:03 crc kubenswrapper[4637]: I1201 16:00:03.574795 4637 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a4af3cb-2171-42a3-876d-9ebfb03d2cc4-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 16:00:03 crc kubenswrapper[4637]: I1201 16:00:03.980334 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29410080-sfpg2" event={"ID":"7a4af3cb-2171-42a3-876d-9ebfb03d2cc4","Type":"ContainerDied","Data":"e94191e1d5e09f13ebfcc6e9722fd39a494230f7e7e7e51bb434a6a918eb63c7"} Dec 01 16:00:03 crc kubenswrapper[4637]: I1201 16:00:03.980381 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e94191e1d5e09f13ebfcc6e9722fd39a494230f7e7e7e51bb434a6a918eb63c7" Dec 01 16:00:03 crc kubenswrapper[4637]: I1201 16:00:03.980416 4637 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29410080-sfpg2" Dec 01 16:00:04 crc kubenswrapper[4637]: I1201 16:00:04.420292 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410035-7zj7l"] Dec 01 16:00:04 crc kubenswrapper[4637]: I1201 16:00:04.430350 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410035-7zj7l"] Dec 01 16:00:05 crc kubenswrapper[4637]: I1201 16:00:05.784141 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdc94dbe-bca6-479b-abce-c8cea50461dc" path="/var/lib/kubelet/pods/fdc94dbe-bca6-479b-abce-c8cea50461dc/volumes" Dec 01 16:00:10 crc kubenswrapper[4637]: I1201 16:00:10.772998 4637 scope.go:117] "RemoveContainer" containerID="73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c" Dec 01 16:00:10 crc kubenswrapper[4637]: E1201 16:00:10.773979 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:00:11 crc kubenswrapper[4637]: I1201 16:00:11.820442 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r_b474756e-aed2-462a-be8d-0ac67a276717/util/0.log" Dec 01 16:00:12 crc kubenswrapper[4637]: I1201 16:00:12.007005 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r_b474756e-aed2-462a-be8d-0ac67a276717/util/0.log" Dec 01 16:00:12 crc 
kubenswrapper[4637]: I1201 16:00:12.046280 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r_b474756e-aed2-462a-be8d-0ac67a276717/pull/0.log"
Dec 01 16:00:12 crc kubenswrapper[4637]: I1201 16:00:12.084155 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r_b474756e-aed2-462a-be8d-0ac67a276717/pull/0.log"
Dec 01 16:00:12 crc kubenswrapper[4637]: I1201 16:00:12.316453 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r_b474756e-aed2-462a-be8d-0ac67a276717/extract/0.log"
Dec 01 16:00:12 crc kubenswrapper[4637]: I1201 16:00:12.440907 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r_b474756e-aed2-462a-be8d-0ac67a276717/util/0.log"
Dec 01 16:00:12 crc kubenswrapper[4637]: I1201 16:00:12.470885 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r_b474756e-aed2-462a-be8d-0ac67a276717/pull/0.log"
Dec 01 16:00:12 crc kubenswrapper[4637]: I1201 16:00:12.554178 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs_b46b721a-cd47-48db-b343-ea841d5ae9fc/util/0.log"
Dec 01 16:00:12 crc kubenswrapper[4637]: I1201 16:00:12.699064 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs_b46b721a-cd47-48db-b343-ea841d5ae9fc/util/0.log"
Dec 01 16:00:12 crc kubenswrapper[4637]: I1201 16:00:12.734251 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs_b46b721a-cd47-48db-b343-ea841d5ae9fc/pull/0.log"
Dec 01 16:00:12 crc kubenswrapper[4637]: I1201 16:00:12.749504 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs_b46b721a-cd47-48db-b343-ea841d5ae9fc/pull/0.log"
Dec 01 16:00:12 crc kubenswrapper[4637]: I1201 16:00:12.975090 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs_b46b721a-cd47-48db-b343-ea841d5ae9fc/util/0.log"
Dec 01 16:00:12 crc kubenswrapper[4637]: I1201 16:00:12.975464 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs_b46b721a-cd47-48db-b343-ea841d5ae9fc/extract/0.log"
Dec 01 16:00:12 crc kubenswrapper[4637]: I1201 16:00:12.991749 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs_b46b721a-cd47-48db-b343-ea841d5ae9fc/pull/0.log"
Dec 01 16:00:13 crc kubenswrapper[4637]: I1201 16:00:13.193212 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q25vd_bc13b662-0282-4e7c-bb90-90c34eb84dc6/extract-utilities/0.log"
Dec 01 16:00:13 crc kubenswrapper[4637]: I1201 16:00:13.382267 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q25vd_bc13b662-0282-4e7c-bb90-90c34eb84dc6/extract-utilities/0.log"
Dec 01 16:00:13 crc kubenswrapper[4637]: I1201 16:00:13.427458 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q25vd_bc13b662-0282-4e7c-bb90-90c34eb84dc6/extract-content/0.log"
Dec 01 16:00:13 crc kubenswrapper[4637]: I1201 16:00:13.454322 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q25vd_bc13b662-0282-4e7c-bb90-90c34eb84dc6/extract-content/0.log"
Dec 01 16:00:13 crc kubenswrapper[4637]: I1201 16:00:13.608041 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q25vd_bc13b662-0282-4e7c-bb90-90c34eb84dc6/extract-utilities/0.log"
Dec 01 16:00:13 crc kubenswrapper[4637]: I1201 16:00:13.687432 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q25vd_bc13b662-0282-4e7c-bb90-90c34eb84dc6/extract-content/0.log"
Dec 01 16:00:13 crc kubenswrapper[4637]: I1201 16:00:13.859033 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gftc2_d8dea35a-d142-4d51-9045-eba9f8449490/extract-utilities/0.log"
Dec 01 16:00:14 crc kubenswrapper[4637]: I1201 16:00:14.057743 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gftc2_d8dea35a-d142-4d51-9045-eba9f8449490/extract-utilities/0.log"
Dec 01 16:00:14 crc kubenswrapper[4637]: I1201 16:00:14.172431 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gftc2_d8dea35a-d142-4d51-9045-eba9f8449490/extract-content/0.log"
Dec 01 16:00:14 crc kubenswrapper[4637]: I1201 16:00:14.172588 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gftc2_d8dea35a-d142-4d51-9045-eba9f8449490/extract-content/0.log"
Dec 01 16:00:14 crc kubenswrapper[4637]: I1201 16:00:14.211779 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q25vd_bc13b662-0282-4e7c-bb90-90c34eb84dc6/registry-server/0.log"
Dec 01 16:00:14 crc kubenswrapper[4637]: I1201 16:00:14.366905 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gftc2_d8dea35a-d142-4d51-9045-eba9f8449490/extract-utilities/0.log"
Dec 01 16:00:14 crc kubenswrapper[4637]: I1201 16:00:14.388501 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gftc2_d8dea35a-d142-4d51-9045-eba9f8449490/extract-content/0.log"
Dec 01 16:00:14 crc kubenswrapper[4637]: I1201 16:00:14.842205 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-df4gw_2a407577-a89c-4bd1-9e97-0140f2ea2c40/extract-utilities/0.log"
Dec 01 16:00:14 crc kubenswrapper[4637]: I1201 16:00:14.848105 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8kwzk_1c222b01-860c-4973-9a37-7abcbfdf910f/marketplace-operator/0.log"
Dec 01 16:00:15 crc kubenswrapper[4637]: I1201 16:00:15.017654 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gftc2_d8dea35a-d142-4d51-9045-eba9f8449490/registry-server/0.log"
Dec 01 16:00:15 crc kubenswrapper[4637]: I1201 16:00:15.166841 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-df4gw_2a407577-a89c-4bd1-9e97-0140f2ea2c40/extract-content/0.log"
Dec 01 16:00:15 crc kubenswrapper[4637]: I1201 16:00:15.198053 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-df4gw_2a407577-a89c-4bd1-9e97-0140f2ea2c40/extract-content/0.log"
Dec 01 16:00:15 crc kubenswrapper[4637]: I1201 16:00:15.200417 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-df4gw_2a407577-a89c-4bd1-9e97-0140f2ea2c40/extract-utilities/0.log"
Dec 01 16:00:15 crc kubenswrapper[4637]: I1201 16:00:15.399117 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-df4gw_2a407577-a89c-4bd1-9e97-0140f2ea2c40/extract-utilities/0.log"
Dec 01 16:00:15 crc kubenswrapper[4637]: I1201 16:00:15.539826 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-df4gw_2a407577-a89c-4bd1-9e97-0140f2ea2c40/registry-server/0.log"
Dec 01 16:00:15 crc kubenswrapper[4637]: I1201 16:00:15.541091 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-df4gw_2a407577-a89c-4bd1-9e97-0140f2ea2c40/extract-content/0.log"
Dec 01 16:00:15 crc kubenswrapper[4637]: I1201 16:00:15.642367 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k9spm_23d95adc-2953-47f5-bf61-15b5eb73fe52/extract-utilities/0.log"
Dec 01 16:00:15 crc kubenswrapper[4637]: I1201 16:00:15.866596 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k9spm_23d95adc-2953-47f5-bf61-15b5eb73fe52/extract-content/0.log"
Dec 01 16:00:15 crc kubenswrapper[4637]: I1201 16:00:15.866728 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k9spm_23d95adc-2953-47f5-bf61-15b5eb73fe52/extract-utilities/0.log"
Dec 01 16:00:15 crc kubenswrapper[4637]: I1201 16:00:15.933110 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k9spm_23d95adc-2953-47f5-bf61-15b5eb73fe52/extract-content/0.log"
Dec 01 16:00:16 crc kubenswrapper[4637]: I1201 16:00:16.101451 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k9spm_23d95adc-2953-47f5-bf61-15b5eb73fe52/extract-content/0.log"
Dec 01 16:00:16 crc kubenswrapper[4637]: I1201 16:00:16.106165 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k9spm_23d95adc-2953-47f5-bf61-15b5eb73fe52/extract-utilities/0.log"
Dec 01 16:00:16 crc kubenswrapper[4637]: I1201 16:00:16.192232 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k9spm_23d95adc-2953-47f5-bf61-15b5eb73fe52/registry-server/0.log"
Dec 01 16:00:23 crc kubenswrapper[4637]: I1201 16:00:23.771891 4637 scope.go:117] "RemoveContainer" containerID="73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c"
Dec 01 16:00:23 crc kubenswrapper[4637]: E1201 16:00:23.773157 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9"
Dec 01 16:00:34 crc kubenswrapper[4637]: I1201 16:00:34.771110 4637 scope.go:117] "RemoveContainer" containerID="73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c"
Dec 01 16:00:34 crc kubenswrapper[4637]: E1201 16:00:34.772057 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9"
Dec 01 16:00:35 crc kubenswrapper[4637]: I1201 16:00:35.390561 4637 scope.go:117] "RemoveContainer" containerID="2db3b0c1fa83b2dec172de69a5f7cfcc064370df53edb4031375297546939548"
Dec 01 16:00:37 crc kubenswrapper[4637]: I1201 16:00:37.564707 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mnvwp"]
Dec 01 16:00:37 crc kubenswrapper[4637]: E1201 16:00:37.565834 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a4af3cb-2171-42a3-876d-9ebfb03d2cc4" containerName="collect-profiles"
Dec 01 16:00:37 crc kubenswrapper[4637]: I1201 16:00:37.565851 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a4af3cb-2171-42a3-876d-9ebfb03d2cc4" containerName="collect-profiles"
Dec 01 16:00:37 crc kubenswrapper[4637]: I1201 16:00:37.567234 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a4af3cb-2171-42a3-876d-9ebfb03d2cc4" containerName="collect-profiles"
Dec 01 16:00:37 crc kubenswrapper[4637]: I1201 16:00:37.568674 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mnvwp"
Dec 01 16:00:37 crc kubenswrapper[4637]: I1201 16:00:37.583383 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mnvwp"]
Dec 01 16:00:37 crc kubenswrapper[4637]: I1201 16:00:37.699126 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4-utilities\") pod \"certified-operators-mnvwp\" (UID: \"07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4\") " pod="openshift-marketplace/certified-operators-mnvwp"
Dec 01 16:00:37 crc kubenswrapper[4637]: I1201 16:00:37.699655 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2v52\" (UniqueName: \"kubernetes.io/projected/07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4-kube-api-access-j2v52\") pod \"certified-operators-mnvwp\" (UID: \"07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4\") " pod="openshift-marketplace/certified-operators-mnvwp"
Dec 01 16:00:37 crc kubenswrapper[4637]: I1201 16:00:37.699861 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4-catalog-content\") pod \"certified-operators-mnvwp\" (UID: \"07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4\") " pod="openshift-marketplace/certified-operators-mnvwp"
Dec 01 16:00:37 crc kubenswrapper[4637]: I1201 16:00:37.802045 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4-catalog-content\") pod \"certified-operators-mnvwp\" (UID: \"07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4\") " pod="openshift-marketplace/certified-operators-mnvwp"
Dec 01 16:00:37 crc kubenswrapper[4637]: I1201 16:00:37.802115 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4-utilities\") pod \"certified-operators-mnvwp\" (UID: \"07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4\") " pod="openshift-marketplace/certified-operators-mnvwp"
Dec 01 16:00:37 crc kubenswrapper[4637]: I1201 16:00:37.802198 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2v52\" (UniqueName: \"kubernetes.io/projected/07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4-kube-api-access-j2v52\") pod \"certified-operators-mnvwp\" (UID: \"07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4\") " pod="openshift-marketplace/certified-operators-mnvwp"
Dec 01 16:00:37 crc kubenswrapper[4637]: I1201 16:00:37.803001 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4-catalog-content\") pod \"certified-operators-mnvwp\" (UID: \"07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4\") " pod="openshift-marketplace/certified-operators-mnvwp"
Dec 01 16:00:37 crc kubenswrapper[4637]: I1201 16:00:37.803231 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4-utilities\") pod \"certified-operators-mnvwp\" (UID: \"07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4\") " pod="openshift-marketplace/certified-operators-mnvwp"
Dec 01 16:00:37 crc kubenswrapper[4637]: I1201 16:00:37.824469 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2v52\" (UniqueName: \"kubernetes.io/projected/07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4-kube-api-access-j2v52\") pod \"certified-operators-mnvwp\" (UID: \"07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4\") " pod="openshift-marketplace/certified-operators-mnvwp"
Dec 01 16:00:37 crc kubenswrapper[4637]: I1201 16:00:37.886082 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mnvwp"
Dec 01 16:00:38 crc kubenswrapper[4637]: I1201 16:00:38.536848 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mnvwp"]
Dec 01 16:00:39 crc kubenswrapper[4637]: I1201 16:00:39.315774 4637 generic.go:334] "Generic (PLEG): container finished" podID="07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4" containerID="29eeb3bab833058abfb886654c69d16431d6c0b835a6ef28e20dc192fa09f2aa" exitCode=0
Dec 01 16:00:39 crc kubenswrapper[4637]: I1201 16:00:39.315830 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnvwp" event={"ID":"07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4","Type":"ContainerDied","Data":"29eeb3bab833058abfb886654c69d16431d6c0b835a6ef28e20dc192fa09f2aa"}
Dec 01 16:00:39 crc kubenswrapper[4637]: I1201 16:00:39.316189 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnvwp" event={"ID":"07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4","Type":"ContainerStarted","Data":"602b8ee2aa187798f87f7d3c95a4f9efc5ec3a3cf0ae3ad84439c61505becb36"}
Dec 01 16:00:39 crc kubenswrapper[4637]: I1201 16:00:39.318193 4637 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 01 16:00:45 crc kubenswrapper[4637]: I1201 16:00:45.771275 4637 scope.go:117] "RemoveContainer" containerID="73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c"
Dec 01 16:00:45 crc kubenswrapper[4637]: E1201 16:00:45.773120 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9"
Dec 01 16:00:51 crc kubenswrapper[4637]: I1201 16:00:51.439323 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnvwp" event={"ID":"07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4","Type":"ContainerStarted","Data":"747f2d53e17560ccfefbfd70d9dba6d0325b7136c339f474b2fa10ec28384e37"}
Dec 01 16:00:52 crc kubenswrapper[4637]: I1201 16:00:52.450994 4637 generic.go:334] "Generic (PLEG): container finished" podID="07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4" containerID="747f2d53e17560ccfefbfd70d9dba6d0325b7136c339f474b2fa10ec28384e37" exitCode=0
Dec 01 16:00:52 crc kubenswrapper[4637]: I1201 16:00:52.451106 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnvwp" event={"ID":"07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4","Type":"ContainerDied","Data":"747f2d53e17560ccfefbfd70d9dba6d0325b7136c339f474b2fa10ec28384e37"}
Dec 01 16:00:54 crc kubenswrapper[4637]: I1201 16:00:54.470410 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnvwp" event={"ID":"07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4","Type":"ContainerStarted","Data":"808fba0123457af1acd13a59d76045b9d94a1c4e11a15988648eea27fd0eafa2"}
Dec 01 16:00:54 crc kubenswrapper[4637]: I1201 16:00:54.503783 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mnvwp" podStartSLOduration=2.772909986 podStartE2EDuration="17.503760821s" podCreationTimestamp="2025-12-01 16:00:37 +0000 UTC" firstStartedPulling="2025-12-01 16:00:39.317967989 +0000 UTC m=+4489.835676817" lastFinishedPulling="2025-12-01 16:00:54.048818824 +0000 UTC m=+4504.566527652" observedRunningTime="2025-12-01 16:00:54.499498306 +0000 UTC m=+4505.017207134" watchObservedRunningTime="2025-12-01 16:00:54.503760821 +0000 UTC m=+4505.021469649"
Dec 01 16:00:57 crc kubenswrapper[4637]: I1201 16:00:57.886248 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mnvwp"
Dec 01 16:00:57 crc kubenswrapper[4637]: I1201 16:00:57.887027 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mnvwp"
Dec 01 16:00:57 crc kubenswrapper[4637]: I1201 16:00:57.934286 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mnvwp"
Dec 01 16:00:58 crc kubenswrapper[4637]: I1201 16:00:58.771383 4637 scope.go:117] "RemoveContainer" containerID="73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c"
Dec 01 16:00:58 crc kubenswrapper[4637]: E1201 16:00:58.771982 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9"
Dec 01 16:00:59 crc kubenswrapper[4637]: I1201 16:00:59.664108 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zmk8g"]
Dec 01 16:00:59 crc kubenswrapper[4637]: I1201 16:00:59.666099 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zmk8g"
Dec 01 16:00:59 crc kubenswrapper[4637]: I1201 16:00:59.719301 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmk8g"]
Dec 01 16:00:59 crc kubenswrapper[4637]: I1201 16:00:59.769420 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fflz\" (UniqueName: \"kubernetes.io/projected/4899be8c-86c3-426c-aaec-89e76e25198d-kube-api-access-2fflz\") pod \"redhat-marketplace-zmk8g\" (UID: \"4899be8c-86c3-426c-aaec-89e76e25198d\") " pod="openshift-marketplace/redhat-marketplace-zmk8g"
Dec 01 16:00:59 crc kubenswrapper[4637]: I1201 16:00:59.769824 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4899be8c-86c3-426c-aaec-89e76e25198d-catalog-content\") pod \"redhat-marketplace-zmk8g\" (UID: \"4899be8c-86c3-426c-aaec-89e76e25198d\") " pod="openshift-marketplace/redhat-marketplace-zmk8g"
Dec 01 16:00:59 crc kubenswrapper[4637]: I1201 16:00:59.769863 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4899be8c-86c3-426c-aaec-89e76e25198d-utilities\") pod \"redhat-marketplace-zmk8g\" (UID: \"4899be8c-86c3-426c-aaec-89e76e25198d\") " pod="openshift-marketplace/redhat-marketplace-zmk8g"
Dec 01 16:00:59 crc kubenswrapper[4637]: I1201 16:00:59.874675 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fflz\" (UniqueName: \"kubernetes.io/projected/4899be8c-86c3-426c-aaec-89e76e25198d-kube-api-access-2fflz\") pod \"redhat-marketplace-zmk8g\" (UID: \"4899be8c-86c3-426c-aaec-89e76e25198d\") " pod="openshift-marketplace/redhat-marketplace-zmk8g"
Dec 01 16:00:59 crc kubenswrapper[4637]: I1201 16:00:59.878985 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4899be8c-86c3-426c-aaec-89e76e25198d-catalog-content\") pod \"redhat-marketplace-zmk8g\" (UID: \"4899be8c-86c3-426c-aaec-89e76e25198d\") " pod="openshift-marketplace/redhat-marketplace-zmk8g"
Dec 01 16:00:59 crc kubenswrapper[4637]: I1201 16:00:59.879725 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4899be8c-86c3-426c-aaec-89e76e25198d-catalog-content\") pod \"redhat-marketplace-zmk8g\" (UID: \"4899be8c-86c3-426c-aaec-89e76e25198d\") " pod="openshift-marketplace/redhat-marketplace-zmk8g"
Dec 01 16:00:59 crc kubenswrapper[4637]: I1201 16:00:59.879734 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4899be8c-86c3-426c-aaec-89e76e25198d-utilities\") pod \"redhat-marketplace-zmk8g\" (UID: \"4899be8c-86c3-426c-aaec-89e76e25198d\") " pod="openshift-marketplace/redhat-marketplace-zmk8g"
Dec 01 16:00:59 crc kubenswrapper[4637]: I1201 16:00:59.881069 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4899be8c-86c3-426c-aaec-89e76e25198d-utilities\") pod \"redhat-marketplace-zmk8g\" (UID: \"4899be8c-86c3-426c-aaec-89e76e25198d\") " pod="openshift-marketplace/redhat-marketplace-zmk8g"
Dec 01 16:00:59 crc kubenswrapper[4637]: I1201 16:00:59.909147 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fflz\" (UniqueName: \"kubernetes.io/projected/4899be8c-86c3-426c-aaec-89e76e25198d-kube-api-access-2fflz\") pod \"redhat-marketplace-zmk8g\" (UID: \"4899be8c-86c3-426c-aaec-89e76e25198d\") " pod="openshift-marketplace/redhat-marketplace-zmk8g"
Dec 01 16:00:59 crc kubenswrapper[4637]: I1201 16:00:59.995471 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zmk8g"
Dec 01 16:01:00 crc kubenswrapper[4637]: I1201 16:01:00.160924 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29410081-7h8rk"]
Dec 01 16:01:00 crc kubenswrapper[4637]: I1201 16:01:00.195029 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29410081-7h8rk"]
Dec 01 16:01:00 crc kubenswrapper[4637]: I1201 16:01:00.195132 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29410081-7h8rk"
Dec 01 16:01:00 crc kubenswrapper[4637]: I1201 16:01:00.291984 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z898\" (UniqueName: \"kubernetes.io/projected/d33387a1-c97a-4279-8b82-e50d32e48b4f-kube-api-access-5z898\") pod \"keystone-cron-29410081-7h8rk\" (UID: \"d33387a1-c97a-4279-8b82-e50d32e48b4f\") " pod="openstack/keystone-cron-29410081-7h8rk"
Dec 01 16:01:00 crc kubenswrapper[4637]: I1201 16:01:00.292032 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d33387a1-c97a-4279-8b82-e50d32e48b4f-config-data\") pod \"keystone-cron-29410081-7h8rk\" (UID: \"d33387a1-c97a-4279-8b82-e50d32e48b4f\") " pod="openstack/keystone-cron-29410081-7h8rk"
Dec 01 16:01:00 crc kubenswrapper[4637]: I1201 16:01:00.292108 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d33387a1-c97a-4279-8b82-e50d32e48b4f-fernet-keys\") pod \"keystone-cron-29410081-7h8rk\" (UID: \"d33387a1-c97a-4279-8b82-e50d32e48b4f\") " pod="openstack/keystone-cron-29410081-7h8rk"
Dec 01 16:01:00 crc kubenswrapper[4637]: I1201 16:01:00.292194 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33387a1-c97a-4279-8b82-e50d32e48b4f-combined-ca-bundle\") pod \"keystone-cron-29410081-7h8rk\" (UID: \"d33387a1-c97a-4279-8b82-e50d32e48b4f\") " pod="openstack/keystone-cron-29410081-7h8rk"
Dec 01 16:01:00 crc kubenswrapper[4637]: I1201 16:01:00.393554 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z898\" (UniqueName: \"kubernetes.io/projected/d33387a1-c97a-4279-8b82-e50d32e48b4f-kube-api-access-5z898\") pod \"keystone-cron-29410081-7h8rk\" (UID: \"d33387a1-c97a-4279-8b82-e50d32e48b4f\") " pod="openstack/keystone-cron-29410081-7h8rk"
Dec 01 16:01:00 crc kubenswrapper[4637]: I1201 16:01:00.393606 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d33387a1-c97a-4279-8b82-e50d32e48b4f-config-data\") pod \"keystone-cron-29410081-7h8rk\" (UID: \"d33387a1-c97a-4279-8b82-e50d32e48b4f\") " pod="openstack/keystone-cron-29410081-7h8rk"
Dec 01 16:01:00 crc kubenswrapper[4637]: I1201 16:01:00.393684 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d33387a1-c97a-4279-8b82-e50d32e48b4f-fernet-keys\") pod \"keystone-cron-29410081-7h8rk\" (UID: \"d33387a1-c97a-4279-8b82-e50d32e48b4f\") " pod="openstack/keystone-cron-29410081-7h8rk"
Dec 01 16:01:00 crc kubenswrapper[4637]: I1201 16:01:00.393712 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33387a1-c97a-4279-8b82-e50d32e48b4f-combined-ca-bundle\") pod \"keystone-cron-29410081-7h8rk\" (UID: \"d33387a1-c97a-4279-8b82-e50d32e48b4f\") " pod="openstack/keystone-cron-29410081-7h8rk"
Dec 01 16:01:00 crc kubenswrapper[4637]: I1201 16:01:00.402280 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d33387a1-c97a-4279-8b82-e50d32e48b4f-config-data\") pod \"keystone-cron-29410081-7h8rk\" (UID: \"d33387a1-c97a-4279-8b82-e50d32e48b4f\") " pod="openstack/keystone-cron-29410081-7h8rk"
Dec 01 16:01:00 crc kubenswrapper[4637]: I1201 16:01:00.418814 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d33387a1-c97a-4279-8b82-e50d32e48b4f-fernet-keys\") pod \"keystone-cron-29410081-7h8rk\" (UID: \"d33387a1-c97a-4279-8b82-e50d32e48b4f\") " pod="openstack/keystone-cron-29410081-7h8rk"
Dec 01 16:01:00 crc kubenswrapper[4637]: I1201 16:01:00.420277 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z898\" (UniqueName: \"kubernetes.io/projected/d33387a1-c97a-4279-8b82-e50d32e48b4f-kube-api-access-5z898\") pod \"keystone-cron-29410081-7h8rk\" (UID: \"d33387a1-c97a-4279-8b82-e50d32e48b4f\") " pod="openstack/keystone-cron-29410081-7h8rk"
Dec 01 16:01:00 crc kubenswrapper[4637]: I1201 16:01:00.434915 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33387a1-c97a-4279-8b82-e50d32e48b4f-combined-ca-bundle\") pod \"keystone-cron-29410081-7h8rk\" (UID: \"d33387a1-c97a-4279-8b82-e50d32e48b4f\") " pod="openstack/keystone-cron-29410081-7h8rk"
Dec 01 16:01:00 crc kubenswrapper[4637]: I1201 16:01:00.516197 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29410081-7h8rk"
Dec 01 16:01:00 crc kubenswrapper[4637]: I1201 16:01:00.575527 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmk8g"]
Dec 01 16:01:01 crc kubenswrapper[4637]: I1201 16:01:01.554101 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmk8g" event={"ID":"4899be8c-86c3-426c-aaec-89e76e25198d","Type":"ContainerStarted","Data":"59ab0f34ecbfc478e86a7a2c54ea22a3c3518f08234110c0a823fb6e98dcce9e"}
Dec 01 16:01:01 crc kubenswrapper[4637]: I1201 16:01:01.556139 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmk8g" event={"ID":"4899be8c-86c3-426c-aaec-89e76e25198d","Type":"ContainerStarted","Data":"25ea2bb030747200e4788cddf4cd7fa7d44abf02c1ba205e609bc12db400bf7b"}
Dec 01 16:01:01 crc kubenswrapper[4637]: I1201 16:01:01.713217 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29410081-7h8rk"]
Dec 01 16:01:02 crc kubenswrapper[4637]: E1201 16:01:02.445556 4637 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.204:44704->38.102.83.204:38409: write tcp 38.102.83.204:44704->38.102.83.204:38409: write: connection reset by peer
Dec 01 16:01:02 crc kubenswrapper[4637]: I1201 16:01:02.565095 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29410081-7h8rk" event={"ID":"d33387a1-c97a-4279-8b82-e50d32e48b4f","Type":"ContainerStarted","Data":"695d52dd789a54953951462b12fdc399d3c9c5c881f751288a84f8b36cfaba7a"}
Dec 01 16:01:02 crc kubenswrapper[4637]: I1201 16:01:02.565141 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29410081-7h8rk" event={"ID":"d33387a1-c97a-4279-8b82-e50d32e48b4f","Type":"ContainerStarted","Data":"3f9275a7a81f7189d4a422820f091db75c4c19d3e04cf0bf28408b0e4f1acb53"}
Dec 01 16:01:02 crc kubenswrapper[4637]: I1201 16:01:02.568389 4637 generic.go:334] "Generic (PLEG): container finished" podID="4899be8c-86c3-426c-aaec-89e76e25198d" containerID="59ab0f34ecbfc478e86a7a2c54ea22a3c3518f08234110c0a823fb6e98dcce9e" exitCode=0
Dec 01 16:01:02 crc kubenswrapper[4637]: I1201 16:01:02.568422 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmk8g" event={"ID":"4899be8c-86c3-426c-aaec-89e76e25198d","Type":"ContainerDied","Data":"59ab0f34ecbfc478e86a7a2c54ea22a3c3518f08234110c0a823fb6e98dcce9e"}
Dec 01 16:01:02 crc kubenswrapper[4637]: I1201 16:01:02.587279 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29410081-7h8rk" podStartSLOduration=2.587255814 podStartE2EDuration="2.587255814s" podCreationTimestamp="2025-12-01 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 16:01:02.586878774 +0000 UTC m=+4513.104587602" watchObservedRunningTime="2025-12-01 16:01:02.587255814 +0000 UTC m=+4513.104964642"
Dec 01 16:01:04 crc kubenswrapper[4637]: I1201 16:01:04.589700 4637 generic.go:334] "Generic (PLEG): container finished" podID="d33387a1-c97a-4279-8b82-e50d32e48b4f" containerID="695d52dd789a54953951462b12fdc399d3c9c5c881f751288a84f8b36cfaba7a" exitCode=0
Dec 01 16:01:04 crc kubenswrapper[4637]: I1201 16:01:04.589819 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29410081-7h8rk" event={"ID":"d33387a1-c97a-4279-8b82-e50d32e48b4f","Type":"ContainerDied","Data":"695d52dd789a54953951462b12fdc399d3c9c5c881f751288a84f8b36cfaba7a"}
Dec 01 16:01:05 crc kubenswrapper[4637]: I1201 16:01:05.962135 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29410081-7h8rk"
Dec 01 16:01:06 crc kubenswrapper[4637]: I1201 16:01:06.140963 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z898\" (UniqueName: \"kubernetes.io/projected/d33387a1-c97a-4279-8b82-e50d32e48b4f-kube-api-access-5z898\") pod \"d33387a1-c97a-4279-8b82-e50d32e48b4f\" (UID: \"d33387a1-c97a-4279-8b82-e50d32e48b4f\") "
Dec 01 16:01:06 crc kubenswrapper[4637]: I1201 16:01:06.141227 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d33387a1-c97a-4279-8b82-e50d32e48b4f-fernet-keys\") pod \"d33387a1-c97a-4279-8b82-e50d32e48b4f\" (UID: \"d33387a1-c97a-4279-8b82-e50d32e48b4f\") "
Dec 01 16:01:06 crc kubenswrapper[4637]: I1201 16:01:06.141318 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33387a1-c97a-4279-8b82-e50d32e48b4f-combined-ca-bundle\") pod \"d33387a1-c97a-4279-8b82-e50d32e48b4f\" (UID: \"d33387a1-c97a-4279-8b82-e50d32e48b4f\") "
Dec 01 16:01:06 crc kubenswrapper[4637]: I1201 16:01:06.141374 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d33387a1-c97a-4279-8b82-e50d32e48b4f-config-data\") pod \"d33387a1-c97a-4279-8b82-e50d32e48b4f\" (UID: \"d33387a1-c97a-4279-8b82-e50d32e48b4f\") "
Dec 01 16:01:06 crc kubenswrapper[4637]: I1201 16:01:06.152808 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d33387a1-c97a-4279-8b82-e50d32e48b4f-kube-api-access-5z898" (OuterVolumeSpecName: "kube-api-access-5z898") pod "d33387a1-c97a-4279-8b82-e50d32e48b4f" (UID: "d33387a1-c97a-4279-8b82-e50d32e48b4f"). InnerVolumeSpecName "kube-api-access-5z898". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 16:01:06 crc kubenswrapper[4637]: I1201 16:01:06.152805 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d33387a1-c97a-4279-8b82-e50d32e48b4f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d33387a1-c97a-4279-8b82-e50d32e48b4f" (UID: "d33387a1-c97a-4279-8b82-e50d32e48b4f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 16:01:06 crc kubenswrapper[4637]: I1201 16:01:06.175839 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d33387a1-c97a-4279-8b82-e50d32e48b4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d33387a1-c97a-4279-8b82-e50d32e48b4f" (UID: "d33387a1-c97a-4279-8b82-e50d32e48b4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 16:01:06 crc kubenswrapper[4637]: I1201 16:01:06.208789 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d33387a1-c97a-4279-8b82-e50d32e48b4f-config-data" (OuterVolumeSpecName: "config-data") pod "d33387a1-c97a-4279-8b82-e50d32e48b4f" (UID: "d33387a1-c97a-4279-8b82-e50d32e48b4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 16:01:06 crc kubenswrapper[4637]: I1201 16:01:06.243400 4637 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d33387a1-c97a-4279-8b82-e50d32e48b4f-fernet-keys\") on node \"crc\" DevicePath \"\""
Dec 01 16:01:06 crc kubenswrapper[4637]: I1201 16:01:06.243434 4637 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33387a1-c97a-4279-8b82-e50d32e48b4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 01 16:01:06 crc kubenswrapper[4637]: I1201 16:01:06.243445 4637 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d33387a1-c97a-4279-8b82-e50d32e48b4f-config-data\") on node \"crc\" DevicePath \"\""
Dec 01 16:01:06 crc kubenswrapper[4637]: I1201 16:01:06.243454 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z898\" (UniqueName: \"kubernetes.io/projected/d33387a1-c97a-4279-8b82-e50d32e48b4f-kube-api-access-5z898\") on node \"crc\" DevicePath \"\""
Dec 01 16:01:06 crc kubenswrapper[4637]: I1201 16:01:06.611134 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29410081-7h8rk" event={"ID":"d33387a1-c97a-4279-8b82-e50d32e48b4f","Type":"ContainerDied","Data":"3f9275a7a81f7189d4a422820f091db75c4c19d3e04cf0bf28408b0e4f1acb53"}
Dec 01 16:01:06 crc kubenswrapper[4637]: I1201 16:01:06.611155 4637 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/keystone-cron-29410081-7h8rk" Dec 01 16:01:06 crc kubenswrapper[4637]: I1201 16:01:06.611579 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f9275a7a81f7189d4a422820f091db75c4c19d3e04cf0bf28408b0e4f1acb53" Dec 01 16:01:06 crc kubenswrapper[4637]: I1201 16:01:06.613917 4637 generic.go:334] "Generic (PLEG): container finished" podID="4899be8c-86c3-426c-aaec-89e76e25198d" containerID="2eeaa61a37ed3cfe766e6ded727fb41ad56866cbf6dd6d0fca3f1c776871862d" exitCode=0 Dec 01 16:01:06 crc kubenswrapper[4637]: I1201 16:01:06.613953 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmk8g" event={"ID":"4899be8c-86c3-426c-aaec-89e76e25198d","Type":"ContainerDied","Data":"2eeaa61a37ed3cfe766e6ded727fb41ad56866cbf6dd6d0fca3f1c776871862d"} Dec 01 16:01:07 crc kubenswrapper[4637]: I1201 16:01:07.938430 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mnvwp" Dec 01 16:01:08 crc kubenswrapper[4637]: I1201 16:01:08.011369 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mnvwp"] Dec 01 16:01:08 crc kubenswrapper[4637]: I1201 16:01:08.094667 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q25vd"] Dec 01 16:01:08 crc kubenswrapper[4637]: I1201 16:01:08.094924 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q25vd" podUID="bc13b662-0282-4e7c-bb90-90c34eb84dc6" containerName="registry-server" containerID="cri-o://107b541d0604dbf1954782eebd6716cf10c9f78d78ac2c74daf2159b17adfe03" gracePeriod=2 Dec 01 16:01:08 crc kubenswrapper[4637]: I1201 16:01:08.654181 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmk8g" 
event={"ID":"4899be8c-86c3-426c-aaec-89e76e25198d","Type":"ContainerStarted","Data":"44eee75d7b512c689ca7808926f0cd33701f825c122557712f3ec932e9ed9557"} Dec 01 16:01:08 crc kubenswrapper[4637]: I1201 16:01:08.660121 4637 generic.go:334] "Generic (PLEG): container finished" podID="bc13b662-0282-4e7c-bb90-90c34eb84dc6" containerID="107b541d0604dbf1954782eebd6716cf10c9f78d78ac2c74daf2159b17adfe03" exitCode=0 Dec 01 16:01:08 crc kubenswrapper[4637]: I1201 16:01:08.660168 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q25vd" event={"ID":"bc13b662-0282-4e7c-bb90-90c34eb84dc6","Type":"ContainerDied","Data":"107b541d0604dbf1954782eebd6716cf10c9f78d78ac2c74daf2159b17adfe03"} Dec 01 16:01:08 crc kubenswrapper[4637]: I1201 16:01:08.660488 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q25vd" event={"ID":"bc13b662-0282-4e7c-bb90-90c34eb84dc6","Type":"ContainerDied","Data":"7d7db492bbc0520e3a7be8e6aef470a392f6830ffab28c71fe3884dd9a634f59"} Dec 01 16:01:08 crc kubenswrapper[4637]: I1201 16:01:08.660517 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d7db492bbc0520e3a7be8e6aef470a392f6830ffab28c71fe3884dd9a634f59" Dec 01 16:01:08 crc kubenswrapper[4637]: I1201 16:01:08.687636 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zmk8g" podStartSLOduration=4.637821586 podStartE2EDuration="9.687618318s" podCreationTimestamp="2025-12-01 16:00:59 +0000 UTC" firstStartedPulling="2025-12-01 16:01:02.571244991 +0000 UTC m=+4513.088953819" lastFinishedPulling="2025-12-01 16:01:07.621041723 +0000 UTC m=+4518.138750551" observedRunningTime="2025-12-01 16:01:08.683362043 +0000 UTC m=+4519.201070871" watchObservedRunningTime="2025-12-01 16:01:08.687618318 +0000 UTC m=+4519.205327146" Dec 01 16:01:08 crc kubenswrapper[4637]: I1201 16:01:08.713653 4637 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q25vd" Dec 01 16:01:08 crc kubenswrapper[4637]: I1201 16:01:08.905462 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72scw\" (UniqueName: \"kubernetes.io/projected/bc13b662-0282-4e7c-bb90-90c34eb84dc6-kube-api-access-72scw\") pod \"bc13b662-0282-4e7c-bb90-90c34eb84dc6\" (UID: \"bc13b662-0282-4e7c-bb90-90c34eb84dc6\") " Dec 01 16:01:08 crc kubenswrapper[4637]: I1201 16:01:08.905807 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc13b662-0282-4e7c-bb90-90c34eb84dc6-utilities\") pod \"bc13b662-0282-4e7c-bb90-90c34eb84dc6\" (UID: \"bc13b662-0282-4e7c-bb90-90c34eb84dc6\") " Dec 01 16:01:08 crc kubenswrapper[4637]: I1201 16:01:08.906287 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc13b662-0282-4e7c-bb90-90c34eb84dc6-utilities" (OuterVolumeSpecName: "utilities") pod "bc13b662-0282-4e7c-bb90-90c34eb84dc6" (UID: "bc13b662-0282-4e7c-bb90-90c34eb84dc6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 16:01:08 crc kubenswrapper[4637]: I1201 16:01:08.906677 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc13b662-0282-4e7c-bb90-90c34eb84dc6-catalog-content\") pod \"bc13b662-0282-4e7c-bb90-90c34eb84dc6\" (UID: \"bc13b662-0282-4e7c-bb90-90c34eb84dc6\") " Dec 01 16:01:08 crc kubenswrapper[4637]: I1201 16:01:08.907612 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc13b662-0282-4e7c-bb90-90c34eb84dc6-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 16:01:08 crc kubenswrapper[4637]: I1201 16:01:08.912487 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc13b662-0282-4e7c-bb90-90c34eb84dc6-kube-api-access-72scw" (OuterVolumeSpecName: "kube-api-access-72scw") pod "bc13b662-0282-4e7c-bb90-90c34eb84dc6" (UID: "bc13b662-0282-4e7c-bb90-90c34eb84dc6"). InnerVolumeSpecName "kube-api-access-72scw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 16:01:08 crc kubenswrapper[4637]: I1201 16:01:08.975631 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc13b662-0282-4e7c-bb90-90c34eb84dc6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc13b662-0282-4e7c-bb90-90c34eb84dc6" (UID: "bc13b662-0282-4e7c-bb90-90c34eb84dc6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 16:01:09 crc kubenswrapper[4637]: I1201 16:01:09.010514 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc13b662-0282-4e7c-bb90-90c34eb84dc6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 16:01:09 crc kubenswrapper[4637]: I1201 16:01:09.010561 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72scw\" (UniqueName: \"kubernetes.io/projected/bc13b662-0282-4e7c-bb90-90c34eb84dc6-kube-api-access-72scw\") on node \"crc\" DevicePath \"\"" Dec 01 16:01:09 crc kubenswrapper[4637]: I1201 16:01:09.667663 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q25vd" Dec 01 16:01:09 crc kubenswrapper[4637]: I1201 16:01:09.700668 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q25vd"] Dec 01 16:01:09 crc kubenswrapper[4637]: I1201 16:01:09.716916 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q25vd"] Dec 01 16:01:09 crc kubenswrapper[4637]: I1201 16:01:09.789661 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc13b662-0282-4e7c-bb90-90c34eb84dc6" path="/var/lib/kubelet/pods/bc13b662-0282-4e7c-bb90-90c34eb84dc6/volumes" Dec 01 16:01:09 crc kubenswrapper[4637]: I1201 16:01:09.995982 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zmk8g" Dec 01 16:01:09 crc kubenswrapper[4637]: I1201 16:01:09.997220 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zmk8g" Dec 01 16:01:10 crc kubenswrapper[4637]: I1201 16:01:10.051221 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zmk8g" Dec 01 16:01:11 crc 
kubenswrapper[4637]: I1201 16:01:11.771892 4637 scope.go:117] "RemoveContainer" containerID="73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c" Dec 01 16:01:11 crc kubenswrapper[4637]: E1201 16:01:11.772576 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:01:20 crc kubenswrapper[4637]: I1201 16:01:20.822020 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zmk8g" Dec 01 16:01:20 crc kubenswrapper[4637]: I1201 16:01:20.882507 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmk8g"] Dec 01 16:01:21 crc kubenswrapper[4637]: I1201 16:01:21.787260 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zmk8g" podUID="4899be8c-86c3-426c-aaec-89e76e25198d" containerName="registry-server" containerID="cri-o://44eee75d7b512c689ca7808926f0cd33701f825c122557712f3ec932e9ed9557" gracePeriod=2 Dec 01 16:01:22 crc kubenswrapper[4637]: I1201 16:01:22.787761 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zmk8g" Dec 01 16:01:22 crc kubenswrapper[4637]: I1201 16:01:22.797602 4637 generic.go:334] "Generic (PLEG): container finished" podID="4899be8c-86c3-426c-aaec-89e76e25198d" containerID="44eee75d7b512c689ca7808926f0cd33701f825c122557712f3ec932e9ed9557" exitCode=0 Dec 01 16:01:22 crc kubenswrapper[4637]: I1201 16:01:22.797655 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmk8g" event={"ID":"4899be8c-86c3-426c-aaec-89e76e25198d","Type":"ContainerDied","Data":"44eee75d7b512c689ca7808926f0cd33701f825c122557712f3ec932e9ed9557"} Dec 01 16:01:22 crc kubenswrapper[4637]: I1201 16:01:22.797696 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmk8g" event={"ID":"4899be8c-86c3-426c-aaec-89e76e25198d","Type":"ContainerDied","Data":"25ea2bb030747200e4788cddf4cd7fa7d44abf02c1ba205e609bc12db400bf7b"} Dec 01 16:01:22 crc kubenswrapper[4637]: I1201 16:01:22.797723 4637 scope.go:117] "RemoveContainer" containerID="44eee75d7b512c689ca7808926f0cd33701f825c122557712f3ec932e9ed9557" Dec 01 16:01:22 crc kubenswrapper[4637]: I1201 16:01:22.797901 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zmk8g" Dec 01 16:01:22 crc kubenswrapper[4637]: I1201 16:01:22.825855 4637 scope.go:117] "RemoveContainer" containerID="2eeaa61a37ed3cfe766e6ded727fb41ad56866cbf6dd6d0fca3f1c776871862d" Dec 01 16:01:22 crc kubenswrapper[4637]: I1201 16:01:22.867047 4637 scope.go:117] "RemoveContainer" containerID="59ab0f34ecbfc478e86a7a2c54ea22a3c3518f08234110c0a823fb6e98dcce9e" Dec 01 16:01:22 crc kubenswrapper[4637]: I1201 16:01:22.900665 4637 scope.go:117] "RemoveContainer" containerID="44eee75d7b512c689ca7808926f0cd33701f825c122557712f3ec932e9ed9557" Dec 01 16:01:22 crc kubenswrapper[4637]: E1201 16:01:22.901286 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44eee75d7b512c689ca7808926f0cd33701f825c122557712f3ec932e9ed9557\": container with ID starting with 44eee75d7b512c689ca7808926f0cd33701f825c122557712f3ec932e9ed9557 not found: ID does not exist" containerID="44eee75d7b512c689ca7808926f0cd33701f825c122557712f3ec932e9ed9557" Dec 01 16:01:22 crc kubenswrapper[4637]: I1201 16:01:22.901327 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44eee75d7b512c689ca7808926f0cd33701f825c122557712f3ec932e9ed9557"} err="failed to get container status \"44eee75d7b512c689ca7808926f0cd33701f825c122557712f3ec932e9ed9557\": rpc error: code = NotFound desc = could not find container \"44eee75d7b512c689ca7808926f0cd33701f825c122557712f3ec932e9ed9557\": container with ID starting with 44eee75d7b512c689ca7808926f0cd33701f825c122557712f3ec932e9ed9557 not found: ID does not exist" Dec 01 16:01:22 crc kubenswrapper[4637]: I1201 16:01:22.901396 4637 scope.go:117] "RemoveContainer" containerID="2eeaa61a37ed3cfe766e6ded727fb41ad56866cbf6dd6d0fca3f1c776871862d" Dec 01 16:01:22 crc kubenswrapper[4637]: E1201 16:01:22.901794 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"2eeaa61a37ed3cfe766e6ded727fb41ad56866cbf6dd6d0fca3f1c776871862d\": container with ID starting with 2eeaa61a37ed3cfe766e6ded727fb41ad56866cbf6dd6d0fca3f1c776871862d not found: ID does not exist" containerID="2eeaa61a37ed3cfe766e6ded727fb41ad56866cbf6dd6d0fca3f1c776871862d" Dec 01 16:01:22 crc kubenswrapper[4637]: I1201 16:01:22.901858 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eeaa61a37ed3cfe766e6ded727fb41ad56866cbf6dd6d0fca3f1c776871862d"} err="failed to get container status \"2eeaa61a37ed3cfe766e6ded727fb41ad56866cbf6dd6d0fca3f1c776871862d\": rpc error: code = NotFound desc = could not find container \"2eeaa61a37ed3cfe766e6ded727fb41ad56866cbf6dd6d0fca3f1c776871862d\": container with ID starting with 2eeaa61a37ed3cfe766e6ded727fb41ad56866cbf6dd6d0fca3f1c776871862d not found: ID does not exist" Dec 01 16:01:22 crc kubenswrapper[4637]: I1201 16:01:22.901894 4637 scope.go:117] "RemoveContainer" containerID="59ab0f34ecbfc478e86a7a2c54ea22a3c3518f08234110c0a823fb6e98dcce9e" Dec 01 16:01:22 crc kubenswrapper[4637]: E1201 16:01:22.902344 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59ab0f34ecbfc478e86a7a2c54ea22a3c3518f08234110c0a823fb6e98dcce9e\": container with ID starting with 59ab0f34ecbfc478e86a7a2c54ea22a3c3518f08234110c0a823fb6e98dcce9e not found: ID does not exist" containerID="59ab0f34ecbfc478e86a7a2c54ea22a3c3518f08234110c0a823fb6e98dcce9e" Dec 01 16:01:22 crc kubenswrapper[4637]: I1201 16:01:22.902373 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59ab0f34ecbfc478e86a7a2c54ea22a3c3518f08234110c0a823fb6e98dcce9e"} err="failed to get container status \"59ab0f34ecbfc478e86a7a2c54ea22a3c3518f08234110c0a823fb6e98dcce9e\": rpc error: code = NotFound desc = could not find container 
\"59ab0f34ecbfc478e86a7a2c54ea22a3c3518f08234110c0a823fb6e98dcce9e\": container with ID starting with 59ab0f34ecbfc478e86a7a2c54ea22a3c3518f08234110c0a823fb6e98dcce9e not found: ID does not exist" Dec 01 16:01:22 crc kubenswrapper[4637]: I1201 16:01:22.970094 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fflz\" (UniqueName: \"kubernetes.io/projected/4899be8c-86c3-426c-aaec-89e76e25198d-kube-api-access-2fflz\") pod \"4899be8c-86c3-426c-aaec-89e76e25198d\" (UID: \"4899be8c-86c3-426c-aaec-89e76e25198d\") " Dec 01 16:01:22 crc kubenswrapper[4637]: I1201 16:01:22.970153 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4899be8c-86c3-426c-aaec-89e76e25198d-catalog-content\") pod \"4899be8c-86c3-426c-aaec-89e76e25198d\" (UID: \"4899be8c-86c3-426c-aaec-89e76e25198d\") " Dec 01 16:01:22 crc kubenswrapper[4637]: I1201 16:01:22.970254 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4899be8c-86c3-426c-aaec-89e76e25198d-utilities\") pod \"4899be8c-86c3-426c-aaec-89e76e25198d\" (UID: \"4899be8c-86c3-426c-aaec-89e76e25198d\") " Dec 01 16:01:22 crc kubenswrapper[4637]: I1201 16:01:22.973948 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4899be8c-86c3-426c-aaec-89e76e25198d-utilities" (OuterVolumeSpecName: "utilities") pod "4899be8c-86c3-426c-aaec-89e76e25198d" (UID: "4899be8c-86c3-426c-aaec-89e76e25198d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 16:01:22 crc kubenswrapper[4637]: I1201 16:01:22.981561 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4899be8c-86c3-426c-aaec-89e76e25198d-kube-api-access-2fflz" (OuterVolumeSpecName: "kube-api-access-2fflz") pod "4899be8c-86c3-426c-aaec-89e76e25198d" (UID: "4899be8c-86c3-426c-aaec-89e76e25198d"). InnerVolumeSpecName "kube-api-access-2fflz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 16:01:22 crc kubenswrapper[4637]: I1201 16:01:22.994903 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4899be8c-86c3-426c-aaec-89e76e25198d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4899be8c-86c3-426c-aaec-89e76e25198d" (UID: "4899be8c-86c3-426c-aaec-89e76e25198d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 16:01:23 crc kubenswrapper[4637]: I1201 16:01:23.072801 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fflz\" (UniqueName: \"kubernetes.io/projected/4899be8c-86c3-426c-aaec-89e76e25198d-kube-api-access-2fflz\") on node \"crc\" DevicePath \"\"" Dec 01 16:01:23 crc kubenswrapper[4637]: I1201 16:01:23.072835 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4899be8c-86c3-426c-aaec-89e76e25198d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 16:01:23 crc kubenswrapper[4637]: I1201 16:01:23.072845 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4899be8c-86c3-426c-aaec-89e76e25198d-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 16:01:23 crc kubenswrapper[4637]: I1201 16:01:23.150722 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmk8g"] Dec 01 16:01:23 crc kubenswrapper[4637]: I1201 
16:01:23.170055 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmk8g"] Dec 01 16:01:23 crc kubenswrapper[4637]: I1201 16:01:23.771278 4637 scope.go:117] "RemoveContainer" containerID="73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c" Dec 01 16:01:23 crc kubenswrapper[4637]: E1201 16:01:23.771590 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:01:23 crc kubenswrapper[4637]: I1201 16:01:23.786244 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4899be8c-86c3-426c-aaec-89e76e25198d" path="/var/lib/kubelet/pods/4899be8c-86c3-426c-aaec-89e76e25198d/volumes" Dec 01 16:01:35 crc kubenswrapper[4637]: I1201 16:01:35.445017 4637 scope.go:117] "RemoveContainer" containerID="001288c02b5cccb22dc4254a61f127185a60201f55b001bd721238f29d998487" Dec 01 16:01:35 crc kubenswrapper[4637]: I1201 16:01:35.487295 4637 scope.go:117] "RemoveContainer" containerID="344b74bbc56af2bb60d6b612947549252fc301feb2d1b390c88604ac41649ddf" Dec 01 16:01:35 crc kubenswrapper[4637]: I1201 16:01:35.585556 4637 scope.go:117] "RemoveContainer" containerID="107b541d0604dbf1954782eebd6716cf10c9f78d78ac2c74daf2159b17adfe03" Dec 01 16:01:38 crc kubenswrapper[4637]: I1201 16:01:38.771392 4637 scope.go:117] "RemoveContainer" containerID="73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c" Dec 01 16:01:38 crc kubenswrapper[4637]: E1201 16:01:38.772166 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:01:53 crc kubenswrapper[4637]: I1201 16:01:53.772042 4637 scope.go:117] "RemoveContainer" containerID="73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c" Dec 01 16:01:53 crc kubenswrapper[4637]: E1201 16:01:53.772844 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:02:04 crc kubenswrapper[4637]: I1201 16:02:04.770916 4637 scope.go:117] "RemoveContainer" containerID="73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c" Dec 01 16:02:04 crc kubenswrapper[4637]: E1201 16:02:04.771767 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:02:16 crc kubenswrapper[4637]: I1201 16:02:16.771817 4637 scope.go:117] "RemoveContainer" containerID="73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c" Dec 01 16:02:16 crc kubenswrapper[4637]: E1201 16:02:16.772706 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:02:27 crc kubenswrapper[4637]: I1201 16:02:27.771678 4637 scope.go:117] "RemoveContainer" containerID="73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c" Dec 01 16:02:27 crc kubenswrapper[4637]: E1201 16:02:27.773496 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:02:38 crc kubenswrapper[4637]: I1201 16:02:38.772776 4637 scope.go:117] "RemoveContainer" containerID="73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c" Dec 01 16:02:38 crc kubenswrapper[4637]: E1201 16:02:38.773634 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:02:53 crc kubenswrapper[4637]: I1201 16:02:53.771533 4637 scope.go:117] "RemoveContainer" containerID="73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c" Dec 01 16:02:54 crc kubenswrapper[4637]: I1201 16:02:54.683474 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" 
event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerStarted","Data":"fb2f4d33bc8f71279f1adfc13c4437e8709d8e9307a5869b86c33a7ed64905ad"} Dec 01 16:02:56 crc kubenswrapper[4637]: I1201 16:02:56.720328 4637 generic.go:334] "Generic (PLEG): container finished" podID="d6a8c17b-c35c-4c91-80a4-db1f0b8511ea" containerID="a802608f32028bf8bc6c6bf955bc3f788475314068b8bd0fbba185f2898e9868" exitCode=0 Dec 01 16:02:56 crc kubenswrapper[4637]: I1201 16:02:56.720490 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6cqvd/must-gather-h6dfh" event={"ID":"d6a8c17b-c35c-4c91-80a4-db1f0b8511ea","Type":"ContainerDied","Data":"a802608f32028bf8bc6c6bf955bc3f788475314068b8bd0fbba185f2898e9868"} Dec 01 16:02:56 crc kubenswrapper[4637]: I1201 16:02:56.721525 4637 scope.go:117] "RemoveContainer" containerID="a802608f32028bf8bc6c6bf955bc3f788475314068b8bd0fbba185f2898e9868" Dec 01 16:02:56 crc kubenswrapper[4637]: I1201 16:02:56.962180 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6cqvd_must-gather-h6dfh_d6a8c17b-c35c-4c91-80a4-db1f0b8511ea/gather/0.log" Dec 01 16:02:59 crc kubenswrapper[4637]: I1201 16:02:59.376003 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jncx5"] Dec 01 16:02:59 crc kubenswrapper[4637]: E1201 16:02:59.377039 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4899be8c-86c3-426c-aaec-89e76e25198d" containerName="extract-content" Dec 01 16:02:59 crc kubenswrapper[4637]: I1201 16:02:59.377056 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="4899be8c-86c3-426c-aaec-89e76e25198d" containerName="extract-content" Dec 01 16:02:59 crc kubenswrapper[4637]: E1201 16:02:59.377066 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc13b662-0282-4e7c-bb90-90c34eb84dc6" containerName="extract-utilities" Dec 01 16:02:59 crc kubenswrapper[4637]: I1201 16:02:59.377073 4637 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="bc13b662-0282-4e7c-bb90-90c34eb84dc6" containerName="extract-utilities" Dec 01 16:02:59 crc kubenswrapper[4637]: E1201 16:02:59.377096 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4899be8c-86c3-426c-aaec-89e76e25198d" containerName="extract-utilities" Dec 01 16:02:59 crc kubenswrapper[4637]: I1201 16:02:59.377103 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="4899be8c-86c3-426c-aaec-89e76e25198d" containerName="extract-utilities" Dec 01 16:02:59 crc kubenswrapper[4637]: E1201 16:02:59.377113 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d33387a1-c97a-4279-8b82-e50d32e48b4f" containerName="keystone-cron" Dec 01 16:02:59 crc kubenswrapper[4637]: I1201 16:02:59.377118 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="d33387a1-c97a-4279-8b82-e50d32e48b4f" containerName="keystone-cron" Dec 01 16:02:59 crc kubenswrapper[4637]: E1201 16:02:59.377136 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc13b662-0282-4e7c-bb90-90c34eb84dc6" containerName="extract-content" Dec 01 16:02:59 crc kubenswrapper[4637]: I1201 16:02:59.377142 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc13b662-0282-4e7c-bb90-90c34eb84dc6" containerName="extract-content" Dec 01 16:02:59 crc kubenswrapper[4637]: E1201 16:02:59.377153 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc13b662-0282-4e7c-bb90-90c34eb84dc6" containerName="registry-server" Dec 01 16:02:59 crc kubenswrapper[4637]: I1201 16:02:59.377159 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc13b662-0282-4e7c-bb90-90c34eb84dc6" containerName="registry-server" Dec 01 16:02:59 crc kubenswrapper[4637]: E1201 16:02:59.377183 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4899be8c-86c3-426c-aaec-89e76e25198d" containerName="registry-server" Dec 01 16:02:59 crc kubenswrapper[4637]: I1201 16:02:59.377192 4637 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="4899be8c-86c3-426c-aaec-89e76e25198d" containerName="registry-server" Dec 01 16:02:59 crc kubenswrapper[4637]: I1201 16:02:59.377384 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc13b662-0282-4e7c-bb90-90c34eb84dc6" containerName="registry-server" Dec 01 16:02:59 crc kubenswrapper[4637]: I1201 16:02:59.377403 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="4899be8c-86c3-426c-aaec-89e76e25198d" containerName="registry-server" Dec 01 16:02:59 crc kubenswrapper[4637]: I1201 16:02:59.377420 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="d33387a1-c97a-4279-8b82-e50d32e48b4f" containerName="keystone-cron" Dec 01 16:02:59 crc kubenswrapper[4637]: I1201 16:02:59.378850 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jncx5" Dec 01 16:02:59 crc kubenswrapper[4637]: I1201 16:02:59.393112 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jncx5"] Dec 01 16:02:59 crc kubenswrapper[4637]: I1201 16:02:59.461539 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd9x9\" (UniqueName: \"kubernetes.io/projected/a1dea67d-bd93-459f-963a-e77bf02527dd-kube-api-access-wd9x9\") pod \"community-operators-jncx5\" (UID: \"a1dea67d-bd93-459f-963a-e77bf02527dd\") " pod="openshift-marketplace/community-operators-jncx5" Dec 01 16:02:59 crc kubenswrapper[4637]: I1201 16:02:59.461659 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1dea67d-bd93-459f-963a-e77bf02527dd-utilities\") pod \"community-operators-jncx5\" (UID: \"a1dea67d-bd93-459f-963a-e77bf02527dd\") " pod="openshift-marketplace/community-operators-jncx5" Dec 01 16:02:59 crc kubenswrapper[4637]: I1201 16:02:59.461756 4637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1dea67d-bd93-459f-963a-e77bf02527dd-catalog-content\") pod \"community-operators-jncx5\" (UID: \"a1dea67d-bd93-459f-963a-e77bf02527dd\") " pod="openshift-marketplace/community-operators-jncx5" Dec 01 16:02:59 crc kubenswrapper[4637]: I1201 16:02:59.563199 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1dea67d-bd93-459f-963a-e77bf02527dd-catalog-content\") pod \"community-operators-jncx5\" (UID: \"a1dea67d-bd93-459f-963a-e77bf02527dd\") " pod="openshift-marketplace/community-operators-jncx5" Dec 01 16:02:59 crc kubenswrapper[4637]: I1201 16:02:59.563352 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd9x9\" (UniqueName: \"kubernetes.io/projected/a1dea67d-bd93-459f-963a-e77bf02527dd-kube-api-access-wd9x9\") pod \"community-operators-jncx5\" (UID: \"a1dea67d-bd93-459f-963a-e77bf02527dd\") " pod="openshift-marketplace/community-operators-jncx5" Dec 01 16:02:59 crc kubenswrapper[4637]: I1201 16:02:59.563429 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1dea67d-bd93-459f-963a-e77bf02527dd-utilities\") pod \"community-operators-jncx5\" (UID: \"a1dea67d-bd93-459f-963a-e77bf02527dd\") " pod="openshift-marketplace/community-operators-jncx5" Dec 01 16:02:59 crc kubenswrapper[4637]: I1201 16:02:59.563869 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1dea67d-bd93-459f-963a-e77bf02527dd-catalog-content\") pod \"community-operators-jncx5\" (UID: \"a1dea67d-bd93-459f-963a-e77bf02527dd\") " pod="openshift-marketplace/community-operators-jncx5" Dec 01 16:02:59 crc kubenswrapper[4637]: I1201 16:02:59.563914 4637 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1dea67d-bd93-459f-963a-e77bf02527dd-utilities\") pod \"community-operators-jncx5\" (UID: \"a1dea67d-bd93-459f-963a-e77bf02527dd\") " pod="openshift-marketplace/community-operators-jncx5" Dec 01 16:02:59 crc kubenswrapper[4637]: I1201 16:02:59.588339 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd9x9\" (UniqueName: \"kubernetes.io/projected/a1dea67d-bd93-459f-963a-e77bf02527dd-kube-api-access-wd9x9\") pod \"community-operators-jncx5\" (UID: \"a1dea67d-bd93-459f-963a-e77bf02527dd\") " pod="openshift-marketplace/community-operators-jncx5" Dec 01 16:02:59 crc kubenswrapper[4637]: I1201 16:02:59.710335 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jncx5" Dec 01 16:03:00 crc kubenswrapper[4637]: I1201 16:03:00.770596 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jncx5"] Dec 01 16:03:00 crc kubenswrapper[4637]: I1201 16:03:00.807539 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jncx5" event={"ID":"a1dea67d-bd93-459f-963a-e77bf02527dd","Type":"ContainerStarted","Data":"9e1ff571f6c38bc8c858680f297954f4eb8586e43981eea83519828c7dd0e4c4"} Dec 01 16:03:01 crc kubenswrapper[4637]: I1201 16:03:01.817842 4637 generic.go:334] "Generic (PLEG): container finished" podID="a1dea67d-bd93-459f-963a-e77bf02527dd" containerID="0ce0fc94dfe24d862113530549e6feea19d5b17f5cc21c3d44d5ac4408e6e58c" exitCode=0 Dec 01 16:03:01 crc kubenswrapper[4637]: I1201 16:03:01.818553 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jncx5" event={"ID":"a1dea67d-bd93-459f-963a-e77bf02527dd","Type":"ContainerDied","Data":"0ce0fc94dfe24d862113530549e6feea19d5b17f5cc21c3d44d5ac4408e6e58c"} Dec 01 16:03:03 crc kubenswrapper[4637]: I1201 
16:03:03.842915 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jncx5" event={"ID":"a1dea67d-bd93-459f-963a-e77bf02527dd","Type":"ContainerStarted","Data":"613bd6739aa3d46a6e89b404c8c811cfa9b6a366d1ef3ce9a92770e18ea0ea94"} Dec 01 16:03:04 crc kubenswrapper[4637]: I1201 16:03:04.858845 4637 generic.go:334] "Generic (PLEG): container finished" podID="a1dea67d-bd93-459f-963a-e77bf02527dd" containerID="613bd6739aa3d46a6e89b404c8c811cfa9b6a366d1ef3ce9a92770e18ea0ea94" exitCode=0 Dec 01 16:03:04 crc kubenswrapper[4637]: I1201 16:03:04.858967 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jncx5" event={"ID":"a1dea67d-bd93-459f-963a-e77bf02527dd","Type":"ContainerDied","Data":"613bd6739aa3d46a6e89b404c8c811cfa9b6a366d1ef3ce9a92770e18ea0ea94"} Dec 01 16:03:06 crc kubenswrapper[4637]: I1201 16:03:06.154980 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6cqvd/must-gather-h6dfh"] Dec 01 16:03:06 crc kubenswrapper[4637]: I1201 16:03:06.156015 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-6cqvd/must-gather-h6dfh" podUID="d6a8c17b-c35c-4c91-80a4-db1f0b8511ea" containerName="copy" containerID="cri-o://1af12d5251802f757ed06e6348032c7e48615a89054bb3aae693722e7d4e23e8" gracePeriod=2 Dec 01 16:03:06 crc kubenswrapper[4637]: I1201 16:03:06.172743 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6cqvd/must-gather-h6dfh"] Dec 01 16:03:06 crc kubenswrapper[4637]: I1201 16:03:06.791235 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6cqvd_must-gather-h6dfh_d6a8c17b-c35c-4c91-80a4-db1f0b8511ea/copy/0.log" Dec 01 16:03:06 crc kubenswrapper[4637]: I1201 16:03:06.793545 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6cqvd/must-gather-h6dfh" Dec 01 16:03:06 crc kubenswrapper[4637]: I1201 16:03:06.857208 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzkxh\" (UniqueName: \"kubernetes.io/projected/d6a8c17b-c35c-4c91-80a4-db1f0b8511ea-kube-api-access-tzkxh\") pod \"d6a8c17b-c35c-4c91-80a4-db1f0b8511ea\" (UID: \"d6a8c17b-c35c-4c91-80a4-db1f0b8511ea\") " Dec 01 16:03:06 crc kubenswrapper[4637]: I1201 16:03:06.857557 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d6a8c17b-c35c-4c91-80a4-db1f0b8511ea-must-gather-output\") pod \"d6a8c17b-c35c-4c91-80a4-db1f0b8511ea\" (UID: \"d6a8c17b-c35c-4c91-80a4-db1f0b8511ea\") " Dec 01 16:03:06 crc kubenswrapper[4637]: I1201 16:03:06.873251 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6a8c17b-c35c-4c91-80a4-db1f0b8511ea-kube-api-access-tzkxh" (OuterVolumeSpecName: "kube-api-access-tzkxh") pod "d6a8c17b-c35c-4c91-80a4-db1f0b8511ea" (UID: "d6a8c17b-c35c-4c91-80a4-db1f0b8511ea"). InnerVolumeSpecName "kube-api-access-tzkxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 16:03:06 crc kubenswrapper[4637]: I1201 16:03:06.884852 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jncx5" event={"ID":"a1dea67d-bd93-459f-963a-e77bf02527dd","Type":"ContainerStarted","Data":"dee5ccddfb1b03a065bcda273bedf6b5bc7c54415990d6615f3af27410c432e4"} Dec 01 16:03:06 crc kubenswrapper[4637]: I1201 16:03:06.894379 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6cqvd_must-gather-h6dfh_d6a8c17b-c35c-4c91-80a4-db1f0b8511ea/copy/0.log" Dec 01 16:03:06 crc kubenswrapper[4637]: I1201 16:03:06.895643 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6cqvd/must-gather-h6dfh" Dec 01 16:03:06 crc kubenswrapper[4637]: I1201 16:03:06.895719 4637 scope.go:117] "RemoveContainer" containerID="1af12d5251802f757ed06e6348032c7e48615a89054bb3aae693722e7d4e23e8" Dec 01 16:03:06 crc kubenswrapper[4637]: I1201 16:03:06.898244 4637 generic.go:334] "Generic (PLEG): container finished" podID="d6a8c17b-c35c-4c91-80a4-db1f0b8511ea" containerID="1af12d5251802f757ed06e6348032c7e48615a89054bb3aae693722e7d4e23e8" exitCode=143 Dec 01 16:03:06 crc kubenswrapper[4637]: I1201 16:03:06.943361 4637 scope.go:117] "RemoveContainer" containerID="a802608f32028bf8bc6c6bf955bc3f788475314068b8bd0fbba185f2898e9868" Dec 01 16:03:06 crc kubenswrapper[4637]: I1201 16:03:06.959723 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzkxh\" (UniqueName: \"kubernetes.io/projected/d6a8c17b-c35c-4c91-80a4-db1f0b8511ea-kube-api-access-tzkxh\") on node \"crc\" DevicePath \"\"" Dec 01 16:03:06 crc kubenswrapper[4637]: I1201 16:03:06.992767 4637 scope.go:117] "RemoveContainer" containerID="1af12d5251802f757ed06e6348032c7e48615a89054bb3aae693722e7d4e23e8" Dec 01 16:03:06 crc kubenswrapper[4637]: E1201 16:03:06.994352 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1af12d5251802f757ed06e6348032c7e48615a89054bb3aae693722e7d4e23e8\": container with ID starting with 1af12d5251802f757ed06e6348032c7e48615a89054bb3aae693722e7d4e23e8 not found: ID does not exist" containerID="1af12d5251802f757ed06e6348032c7e48615a89054bb3aae693722e7d4e23e8" Dec 01 16:03:06 crc kubenswrapper[4637]: I1201 16:03:06.994491 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1af12d5251802f757ed06e6348032c7e48615a89054bb3aae693722e7d4e23e8"} err="failed to get container status \"1af12d5251802f757ed06e6348032c7e48615a89054bb3aae693722e7d4e23e8\": rpc error: code = NotFound desc = could not 
find container \"1af12d5251802f757ed06e6348032c7e48615a89054bb3aae693722e7d4e23e8\": container with ID starting with 1af12d5251802f757ed06e6348032c7e48615a89054bb3aae693722e7d4e23e8 not found: ID does not exist" Dec 01 16:03:06 crc kubenswrapper[4637]: I1201 16:03:06.994569 4637 scope.go:117] "RemoveContainer" containerID="a802608f32028bf8bc6c6bf955bc3f788475314068b8bd0fbba185f2898e9868" Dec 01 16:03:06 crc kubenswrapper[4637]: E1201 16:03:06.995041 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a802608f32028bf8bc6c6bf955bc3f788475314068b8bd0fbba185f2898e9868\": container with ID starting with a802608f32028bf8bc6c6bf955bc3f788475314068b8bd0fbba185f2898e9868 not found: ID does not exist" containerID="a802608f32028bf8bc6c6bf955bc3f788475314068b8bd0fbba185f2898e9868" Dec 01 16:03:06 crc kubenswrapper[4637]: I1201 16:03:06.995101 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a802608f32028bf8bc6c6bf955bc3f788475314068b8bd0fbba185f2898e9868"} err="failed to get container status \"a802608f32028bf8bc6c6bf955bc3f788475314068b8bd0fbba185f2898e9868\": rpc error: code = NotFound desc = could not find container \"a802608f32028bf8bc6c6bf955bc3f788475314068b8bd0fbba185f2898e9868\": container with ID starting with a802608f32028bf8bc6c6bf955bc3f788475314068b8bd0fbba185f2898e9868 not found: ID does not exist" Dec 01 16:03:07 crc kubenswrapper[4637]: I1201 16:03:07.087676 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6a8c17b-c35c-4c91-80a4-db1f0b8511ea-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "d6a8c17b-c35c-4c91-80a4-db1f0b8511ea" (UID: "d6a8c17b-c35c-4c91-80a4-db1f0b8511ea"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 16:03:07 crc kubenswrapper[4637]: I1201 16:03:07.164334 4637 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d6a8c17b-c35c-4c91-80a4-db1f0b8511ea-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 01 16:03:07 crc kubenswrapper[4637]: I1201 16:03:07.783421 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6a8c17b-c35c-4c91-80a4-db1f0b8511ea" path="/var/lib/kubelet/pods/d6a8c17b-c35c-4c91-80a4-db1f0b8511ea/volumes" Dec 01 16:03:09 crc kubenswrapper[4637]: I1201 16:03:09.710759 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jncx5" Dec 01 16:03:09 crc kubenswrapper[4637]: I1201 16:03:09.711284 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jncx5" Dec 01 16:03:09 crc kubenswrapper[4637]: I1201 16:03:09.795916 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jncx5" Dec 01 16:03:09 crc kubenswrapper[4637]: I1201 16:03:09.842530 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jncx5" podStartSLOduration=6.737797302 podStartE2EDuration="10.842473836s" podCreationTimestamp="2025-12-01 16:02:59 +0000 UTC" firstStartedPulling="2025-12-01 16:03:01.821047583 +0000 UTC m=+4632.338756401" lastFinishedPulling="2025-12-01 16:03:05.925724107 +0000 UTC m=+4636.443432935" observedRunningTime="2025-12-01 16:03:06.92278003 +0000 UTC m=+4637.440488858" watchObservedRunningTime="2025-12-01 16:03:09.842473836 +0000 UTC m=+4640.360182654" Dec 01 16:03:19 crc kubenswrapper[4637]: I1201 16:03:19.958635 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jncx5" Dec 01 16:03:20 crc 
kubenswrapper[4637]: I1201 16:03:20.030976 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jncx5"] Dec 01 16:03:20 crc kubenswrapper[4637]: I1201 16:03:20.041369 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jncx5" podUID="a1dea67d-bd93-459f-963a-e77bf02527dd" containerName="registry-server" containerID="cri-o://dee5ccddfb1b03a065bcda273bedf6b5bc7c54415990d6615f3af27410c432e4" gracePeriod=2 Dec 01 16:03:20 crc kubenswrapper[4637]: I1201 16:03:20.527121 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jncx5" Dec 01 16:03:20 crc kubenswrapper[4637]: I1201 16:03:20.703272 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1dea67d-bd93-459f-963a-e77bf02527dd-catalog-content\") pod \"a1dea67d-bd93-459f-963a-e77bf02527dd\" (UID: \"a1dea67d-bd93-459f-963a-e77bf02527dd\") " Dec 01 16:03:20 crc kubenswrapper[4637]: I1201 16:03:20.703350 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wd9x9\" (UniqueName: \"kubernetes.io/projected/a1dea67d-bd93-459f-963a-e77bf02527dd-kube-api-access-wd9x9\") pod \"a1dea67d-bd93-459f-963a-e77bf02527dd\" (UID: \"a1dea67d-bd93-459f-963a-e77bf02527dd\") " Dec 01 16:03:20 crc kubenswrapper[4637]: I1201 16:03:20.703403 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1dea67d-bd93-459f-963a-e77bf02527dd-utilities\") pod \"a1dea67d-bd93-459f-963a-e77bf02527dd\" (UID: \"a1dea67d-bd93-459f-963a-e77bf02527dd\") " Dec 01 16:03:20 crc kubenswrapper[4637]: I1201 16:03:20.704530 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a1dea67d-bd93-459f-963a-e77bf02527dd-utilities" (OuterVolumeSpecName: "utilities") pod "a1dea67d-bd93-459f-963a-e77bf02527dd" (UID: "a1dea67d-bd93-459f-963a-e77bf02527dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 16:03:20 crc kubenswrapper[4637]: I1201 16:03:20.710623 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1dea67d-bd93-459f-963a-e77bf02527dd-kube-api-access-wd9x9" (OuterVolumeSpecName: "kube-api-access-wd9x9") pod "a1dea67d-bd93-459f-963a-e77bf02527dd" (UID: "a1dea67d-bd93-459f-963a-e77bf02527dd"). InnerVolumeSpecName "kube-api-access-wd9x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 16:03:20 crc kubenswrapper[4637]: I1201 16:03:20.762348 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1dea67d-bd93-459f-963a-e77bf02527dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1dea67d-bd93-459f-963a-e77bf02527dd" (UID: "a1dea67d-bd93-459f-963a-e77bf02527dd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 16:03:20 crc kubenswrapper[4637]: I1201 16:03:20.805314 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1dea67d-bd93-459f-963a-e77bf02527dd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 16:03:20 crc kubenswrapper[4637]: I1201 16:03:20.805343 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wd9x9\" (UniqueName: \"kubernetes.io/projected/a1dea67d-bd93-459f-963a-e77bf02527dd-kube-api-access-wd9x9\") on node \"crc\" DevicePath \"\"" Dec 01 16:03:20 crc kubenswrapper[4637]: I1201 16:03:20.805355 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1dea67d-bd93-459f-963a-e77bf02527dd-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 16:03:21 crc kubenswrapper[4637]: I1201 16:03:21.050392 4637 generic.go:334] "Generic (PLEG): container finished" podID="a1dea67d-bd93-459f-963a-e77bf02527dd" containerID="dee5ccddfb1b03a065bcda273bedf6b5bc7c54415990d6615f3af27410c432e4" exitCode=0 Dec 01 16:03:21 crc kubenswrapper[4637]: I1201 16:03:21.050682 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jncx5" event={"ID":"a1dea67d-bd93-459f-963a-e77bf02527dd","Type":"ContainerDied","Data":"dee5ccddfb1b03a065bcda273bedf6b5bc7c54415990d6615f3af27410c432e4"} Dec 01 16:03:21 crc kubenswrapper[4637]: I1201 16:03:21.050708 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jncx5" event={"ID":"a1dea67d-bd93-459f-963a-e77bf02527dd","Type":"ContainerDied","Data":"9e1ff571f6c38bc8c858680f297954f4eb8586e43981eea83519828c7dd0e4c4"} Dec 01 16:03:21 crc kubenswrapper[4637]: I1201 16:03:21.050726 4637 scope.go:117] "RemoveContainer" containerID="dee5ccddfb1b03a065bcda273bedf6b5bc7c54415990d6615f3af27410c432e4" Dec 01 16:03:21 crc kubenswrapper[4637]: I1201 
16:03:21.050856 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jncx5" Dec 01 16:03:21 crc kubenswrapper[4637]: I1201 16:03:21.071883 4637 scope.go:117] "RemoveContainer" containerID="613bd6739aa3d46a6e89b404c8c811cfa9b6a366d1ef3ce9a92770e18ea0ea94" Dec 01 16:03:21 crc kubenswrapper[4637]: I1201 16:03:21.085166 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jncx5"] Dec 01 16:03:21 crc kubenswrapper[4637]: I1201 16:03:21.095529 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jncx5"] Dec 01 16:03:21 crc kubenswrapper[4637]: I1201 16:03:21.095768 4637 scope.go:117] "RemoveContainer" containerID="0ce0fc94dfe24d862113530549e6feea19d5b17f5cc21c3d44d5ac4408e6e58c" Dec 01 16:03:21 crc kubenswrapper[4637]: I1201 16:03:21.134307 4637 scope.go:117] "RemoveContainer" containerID="dee5ccddfb1b03a065bcda273bedf6b5bc7c54415990d6615f3af27410c432e4" Dec 01 16:03:21 crc kubenswrapper[4637]: E1201 16:03:21.134773 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dee5ccddfb1b03a065bcda273bedf6b5bc7c54415990d6615f3af27410c432e4\": container with ID starting with dee5ccddfb1b03a065bcda273bedf6b5bc7c54415990d6615f3af27410c432e4 not found: ID does not exist" containerID="dee5ccddfb1b03a065bcda273bedf6b5bc7c54415990d6615f3af27410c432e4" Dec 01 16:03:21 crc kubenswrapper[4637]: I1201 16:03:21.134854 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dee5ccddfb1b03a065bcda273bedf6b5bc7c54415990d6615f3af27410c432e4"} err="failed to get container status \"dee5ccddfb1b03a065bcda273bedf6b5bc7c54415990d6615f3af27410c432e4\": rpc error: code = NotFound desc = could not find container \"dee5ccddfb1b03a065bcda273bedf6b5bc7c54415990d6615f3af27410c432e4\": container with ID starting with 
dee5ccddfb1b03a065bcda273bedf6b5bc7c54415990d6615f3af27410c432e4 not found: ID does not exist" Dec 01 16:03:21 crc kubenswrapper[4637]: I1201 16:03:21.134942 4637 scope.go:117] "RemoveContainer" containerID="613bd6739aa3d46a6e89b404c8c811cfa9b6a366d1ef3ce9a92770e18ea0ea94" Dec 01 16:03:21 crc kubenswrapper[4637]: E1201 16:03:21.135180 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"613bd6739aa3d46a6e89b404c8c811cfa9b6a366d1ef3ce9a92770e18ea0ea94\": container with ID starting with 613bd6739aa3d46a6e89b404c8c811cfa9b6a366d1ef3ce9a92770e18ea0ea94 not found: ID does not exist" containerID="613bd6739aa3d46a6e89b404c8c811cfa9b6a366d1ef3ce9a92770e18ea0ea94" Dec 01 16:03:21 crc kubenswrapper[4637]: I1201 16:03:21.135251 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"613bd6739aa3d46a6e89b404c8c811cfa9b6a366d1ef3ce9a92770e18ea0ea94"} err="failed to get container status \"613bd6739aa3d46a6e89b404c8c811cfa9b6a366d1ef3ce9a92770e18ea0ea94\": rpc error: code = NotFound desc = could not find container \"613bd6739aa3d46a6e89b404c8c811cfa9b6a366d1ef3ce9a92770e18ea0ea94\": container with ID starting with 613bd6739aa3d46a6e89b404c8c811cfa9b6a366d1ef3ce9a92770e18ea0ea94 not found: ID does not exist" Dec 01 16:03:21 crc kubenswrapper[4637]: I1201 16:03:21.135317 4637 scope.go:117] "RemoveContainer" containerID="0ce0fc94dfe24d862113530549e6feea19d5b17f5cc21c3d44d5ac4408e6e58c" Dec 01 16:03:21 crc kubenswrapper[4637]: E1201 16:03:21.136097 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ce0fc94dfe24d862113530549e6feea19d5b17f5cc21c3d44d5ac4408e6e58c\": container with ID starting with 0ce0fc94dfe24d862113530549e6feea19d5b17f5cc21c3d44d5ac4408e6e58c not found: ID does not exist" containerID="0ce0fc94dfe24d862113530549e6feea19d5b17f5cc21c3d44d5ac4408e6e58c" Dec 01 16:03:21 crc 
kubenswrapper[4637]: I1201 16:03:21.136191 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ce0fc94dfe24d862113530549e6feea19d5b17f5cc21c3d44d5ac4408e6e58c"} err="failed to get container status \"0ce0fc94dfe24d862113530549e6feea19d5b17f5cc21c3d44d5ac4408e6e58c\": rpc error: code = NotFound desc = could not find container \"0ce0fc94dfe24d862113530549e6feea19d5b17f5cc21c3d44d5ac4408e6e58c\": container with ID starting with 0ce0fc94dfe24d862113530549e6feea19d5b17f5cc21c3d44d5ac4408e6e58c not found: ID does not exist" Dec 01 16:03:21 crc kubenswrapper[4637]: I1201 16:03:21.782680 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1dea67d-bd93-459f-963a-e77bf02527dd" path="/var/lib/kubelet/pods/a1dea67d-bd93-459f-963a-e77bf02527dd/volumes" Dec 01 16:03:35 crc kubenswrapper[4637]: I1201 16:03:35.677840 4637 scope.go:117] "RemoveContainer" containerID="39be7bf1a8e834cb27430c546947d20fc11cad8b99314fa9a0b6d0e6ac452ae5" Dec 01 16:05:15 crc kubenswrapper[4637]: I1201 16:05:15.613317 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 16:05:15 crc kubenswrapper[4637]: I1201 16:05:15.615510 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 16:05:45 crc kubenswrapper[4637]: I1201 16:05:45.613640 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 16:05:45 crc kubenswrapper[4637]: I1201 16:05:45.614321 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 16:05:51 crc kubenswrapper[4637]: I1201 16:05:51.961548 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-whvdq/must-gather-jlzmv"] Dec 01 16:05:51 crc kubenswrapper[4637]: E1201 16:05:51.962745 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1dea67d-bd93-459f-963a-e77bf02527dd" containerName="extract-utilities" Dec 01 16:05:51 crc kubenswrapper[4637]: I1201 16:05:51.962766 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1dea67d-bd93-459f-963a-e77bf02527dd" containerName="extract-utilities" Dec 01 16:05:51 crc kubenswrapper[4637]: E1201 16:05:51.962796 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1dea67d-bd93-459f-963a-e77bf02527dd" containerName="extract-content" Dec 01 16:05:51 crc kubenswrapper[4637]: I1201 16:05:51.962805 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1dea67d-bd93-459f-963a-e77bf02527dd" containerName="extract-content" Dec 01 16:05:51 crc kubenswrapper[4637]: E1201 16:05:51.962828 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a8c17b-c35c-4c91-80a4-db1f0b8511ea" containerName="copy" Dec 01 16:05:51 crc kubenswrapper[4637]: I1201 16:05:51.962838 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a8c17b-c35c-4c91-80a4-db1f0b8511ea" containerName="copy" Dec 01 16:05:51 crc kubenswrapper[4637]: E1201 16:05:51.962861 4637 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d6a8c17b-c35c-4c91-80a4-db1f0b8511ea" containerName="gather" Dec 01 16:05:51 crc kubenswrapper[4637]: I1201 16:05:51.962869 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a8c17b-c35c-4c91-80a4-db1f0b8511ea" containerName="gather" Dec 01 16:05:51 crc kubenswrapper[4637]: E1201 16:05:51.962888 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1dea67d-bd93-459f-963a-e77bf02527dd" containerName="registry-server" Dec 01 16:05:51 crc kubenswrapper[4637]: I1201 16:05:51.962895 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1dea67d-bd93-459f-963a-e77bf02527dd" containerName="registry-server" Dec 01 16:05:51 crc kubenswrapper[4637]: I1201 16:05:51.963114 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1dea67d-bd93-459f-963a-e77bf02527dd" containerName="registry-server" Dec 01 16:05:51 crc kubenswrapper[4637]: I1201 16:05:51.963132 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a8c17b-c35c-4c91-80a4-db1f0b8511ea" containerName="copy" Dec 01 16:05:51 crc kubenswrapper[4637]: I1201 16:05:51.963147 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a8c17b-c35c-4c91-80a4-db1f0b8511ea" containerName="gather" Dec 01 16:05:51 crc kubenswrapper[4637]: I1201 16:05:51.969606 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-whvdq/must-gather-jlzmv" Dec 01 16:05:51 crc kubenswrapper[4637]: I1201 16:05:51.974267 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-whvdq/must-gather-jlzmv"] Dec 01 16:05:51 crc kubenswrapper[4637]: I1201 16:05:51.977605 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-whvdq"/"openshift-service-ca.crt" Dec 01 16:05:51 crc kubenswrapper[4637]: I1201 16:05:51.977636 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-whvdq"/"default-dockercfg-6dz8b" Dec 01 16:05:52 crc kubenswrapper[4637]: I1201 16:05:52.008080 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-whvdq"/"kube-root-ca.crt" Dec 01 16:05:52 crc kubenswrapper[4637]: I1201 16:05:52.173962 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/423f70f5-df78-4323-b374-2c7a5eb93b31-must-gather-output\") pod \"must-gather-jlzmv\" (UID: \"423f70f5-df78-4323-b374-2c7a5eb93b31\") " pod="openshift-must-gather-whvdq/must-gather-jlzmv" Dec 01 16:05:52 crc kubenswrapper[4637]: I1201 16:05:52.181761 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbsls\" (UniqueName: \"kubernetes.io/projected/423f70f5-df78-4323-b374-2c7a5eb93b31-kube-api-access-zbsls\") pod \"must-gather-jlzmv\" (UID: \"423f70f5-df78-4323-b374-2c7a5eb93b31\") " pod="openshift-must-gather-whvdq/must-gather-jlzmv" Dec 01 16:05:52 crc kubenswrapper[4637]: I1201 16:05:52.284274 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/423f70f5-df78-4323-b374-2c7a5eb93b31-must-gather-output\") pod \"must-gather-jlzmv\" (UID: \"423f70f5-df78-4323-b374-2c7a5eb93b31\") " 
pod="openshift-must-gather-whvdq/must-gather-jlzmv" Dec 01 16:05:52 crc kubenswrapper[4637]: I1201 16:05:52.284413 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbsls\" (UniqueName: \"kubernetes.io/projected/423f70f5-df78-4323-b374-2c7a5eb93b31-kube-api-access-zbsls\") pod \"must-gather-jlzmv\" (UID: \"423f70f5-df78-4323-b374-2c7a5eb93b31\") " pod="openshift-must-gather-whvdq/must-gather-jlzmv" Dec 01 16:05:52 crc kubenswrapper[4637]: I1201 16:05:52.284747 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/423f70f5-df78-4323-b374-2c7a5eb93b31-must-gather-output\") pod \"must-gather-jlzmv\" (UID: \"423f70f5-df78-4323-b374-2c7a5eb93b31\") " pod="openshift-must-gather-whvdq/must-gather-jlzmv" Dec 01 16:05:52 crc kubenswrapper[4637]: I1201 16:05:52.319115 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbsls\" (UniqueName: \"kubernetes.io/projected/423f70f5-df78-4323-b374-2c7a5eb93b31-kube-api-access-zbsls\") pod \"must-gather-jlzmv\" (UID: \"423f70f5-df78-4323-b374-2c7a5eb93b31\") " pod="openshift-must-gather-whvdq/must-gather-jlzmv" Dec 01 16:05:52 crc kubenswrapper[4637]: I1201 16:05:52.589457 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-whvdq/must-gather-jlzmv" Dec 01 16:05:53 crc kubenswrapper[4637]: I1201 16:05:53.104448 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-whvdq/must-gather-jlzmv"] Dec 01 16:05:53 crc kubenswrapper[4637]: I1201 16:05:53.404561 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qhcl7"] Dec 01 16:05:53 crc kubenswrapper[4637]: I1201 16:05:53.407219 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qhcl7" Dec 01 16:05:53 crc kubenswrapper[4637]: I1201 16:05:53.409551 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a364742-5315-43af-9e16-66e4d39282f8-catalog-content\") pod \"redhat-operators-qhcl7\" (UID: \"1a364742-5315-43af-9e16-66e4d39282f8\") " pod="openshift-marketplace/redhat-operators-qhcl7" Dec 01 16:05:53 crc kubenswrapper[4637]: I1201 16:05:53.410003 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjpn8\" (UniqueName: \"kubernetes.io/projected/1a364742-5315-43af-9e16-66e4d39282f8-kube-api-access-qjpn8\") pod \"redhat-operators-qhcl7\" (UID: \"1a364742-5315-43af-9e16-66e4d39282f8\") " pod="openshift-marketplace/redhat-operators-qhcl7" Dec 01 16:05:53 crc kubenswrapper[4637]: I1201 16:05:53.410117 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a364742-5315-43af-9e16-66e4d39282f8-utilities\") pod \"redhat-operators-qhcl7\" (UID: \"1a364742-5315-43af-9e16-66e4d39282f8\") " pod="openshift-marketplace/redhat-operators-qhcl7" Dec 01 16:05:53 crc kubenswrapper[4637]: I1201 16:05:53.443001 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qhcl7"] Dec 01 16:05:53 crc kubenswrapper[4637]: I1201 16:05:53.511952 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a364742-5315-43af-9e16-66e4d39282f8-catalog-content\") pod \"redhat-operators-qhcl7\" (UID: \"1a364742-5315-43af-9e16-66e4d39282f8\") " pod="openshift-marketplace/redhat-operators-qhcl7" Dec 01 16:05:53 crc kubenswrapper[4637]: I1201 16:05:53.512042 4637 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-qjpn8\" (UniqueName: \"kubernetes.io/projected/1a364742-5315-43af-9e16-66e4d39282f8-kube-api-access-qjpn8\") pod \"redhat-operators-qhcl7\" (UID: \"1a364742-5315-43af-9e16-66e4d39282f8\") " pod="openshift-marketplace/redhat-operators-qhcl7" Dec 01 16:05:53 crc kubenswrapper[4637]: I1201 16:05:53.512090 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a364742-5315-43af-9e16-66e4d39282f8-utilities\") pod \"redhat-operators-qhcl7\" (UID: \"1a364742-5315-43af-9e16-66e4d39282f8\") " pod="openshift-marketplace/redhat-operators-qhcl7" Dec 01 16:05:53 crc kubenswrapper[4637]: I1201 16:05:53.514592 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a364742-5315-43af-9e16-66e4d39282f8-utilities\") pod \"redhat-operators-qhcl7\" (UID: \"1a364742-5315-43af-9e16-66e4d39282f8\") " pod="openshift-marketplace/redhat-operators-qhcl7" Dec 01 16:05:53 crc kubenswrapper[4637]: I1201 16:05:53.514847 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a364742-5315-43af-9e16-66e4d39282f8-catalog-content\") pod \"redhat-operators-qhcl7\" (UID: \"1a364742-5315-43af-9e16-66e4d39282f8\") " pod="openshift-marketplace/redhat-operators-qhcl7" Dec 01 16:05:53 crc kubenswrapper[4637]: I1201 16:05:53.556732 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjpn8\" (UniqueName: \"kubernetes.io/projected/1a364742-5315-43af-9e16-66e4d39282f8-kube-api-access-qjpn8\") pod \"redhat-operators-qhcl7\" (UID: \"1a364742-5315-43af-9e16-66e4d39282f8\") " pod="openshift-marketplace/redhat-operators-qhcl7" Dec 01 16:05:53 crc kubenswrapper[4637]: I1201 16:05:53.616581 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-whvdq/must-gather-jlzmv" 
event={"ID":"423f70f5-df78-4323-b374-2c7a5eb93b31","Type":"ContainerStarted","Data":"c1490ba562c717c4fb02dc2d91ad265b7f90d7c7f54d4cf32116c37ad000a5f3"} Dec 01 16:05:53 crc kubenswrapper[4637]: I1201 16:05:53.616647 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-whvdq/must-gather-jlzmv" event={"ID":"423f70f5-df78-4323-b374-2c7a5eb93b31","Type":"ContainerStarted","Data":"884c505d047b1cd25e0b2bc9e6894b01b6e8ecd0cb418a709b3e8fa588500e2d"} Dec 01 16:05:53 crc kubenswrapper[4637]: I1201 16:05:53.756373 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qhcl7" Dec 01 16:05:54 crc kubenswrapper[4637]: I1201 16:05:54.283305 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qhcl7"] Dec 01 16:05:54 crc kubenswrapper[4637]: I1201 16:05:54.629043 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-whvdq/must-gather-jlzmv" event={"ID":"423f70f5-df78-4323-b374-2c7a5eb93b31","Type":"ContainerStarted","Data":"2827f44126455436da431ea422a9fb190ac0f05e3b86ae9ee0c01dfa1b6ad3e8"} Dec 01 16:05:54 crc kubenswrapper[4637]: I1201 16:05:54.632323 4637 generic.go:334] "Generic (PLEG): container finished" podID="1a364742-5315-43af-9e16-66e4d39282f8" containerID="e49bc47052273fb0e3471cf6377d33ecd2d77d87183a408fcd34224e507a11cf" exitCode=0 Dec 01 16:05:54 crc kubenswrapper[4637]: I1201 16:05:54.632405 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhcl7" event={"ID":"1a364742-5315-43af-9e16-66e4d39282f8","Type":"ContainerDied","Data":"e49bc47052273fb0e3471cf6377d33ecd2d77d87183a408fcd34224e507a11cf"} Dec 01 16:05:54 crc kubenswrapper[4637]: I1201 16:05:54.632548 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhcl7" 
event={"ID":"1a364742-5315-43af-9e16-66e4d39282f8","Type":"ContainerStarted","Data":"64726601604f8a57ea74e3d1f1c7d35d6f7c69959722b786b6749abcd81d358a"} Dec 01 16:05:54 crc kubenswrapper[4637]: I1201 16:05:54.634591 4637 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 16:05:54 crc kubenswrapper[4637]: I1201 16:05:54.652743 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-whvdq/must-gather-jlzmv" podStartSLOduration=3.652721363 podStartE2EDuration="3.652721363s" podCreationTimestamp="2025-12-01 16:05:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 16:05:54.651668164 +0000 UTC m=+4805.169376992" watchObservedRunningTime="2025-12-01 16:05:54.652721363 +0000 UTC m=+4805.170430191" Dec 01 16:05:57 crc kubenswrapper[4637]: E1201 16:05:57.251007 4637 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.204:43214->38.102.83.204:38409: write tcp 38.102.83.204:43214->38.102.83.204:38409: write: broken pipe Dec 01 16:05:58 crc kubenswrapper[4637]: I1201 16:05:58.306041 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-whvdq/crc-debug-2v82w"] Dec 01 16:05:58 crc kubenswrapper[4637]: I1201 16:05:58.308172 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-whvdq/crc-debug-2v82w" Dec 01 16:05:58 crc kubenswrapper[4637]: I1201 16:05:58.335031 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/48f26f45-cc01-4b9c-ad90-ea8f7115eff7-host\") pod \"crc-debug-2v82w\" (UID: \"48f26f45-cc01-4b9c-ad90-ea8f7115eff7\") " pod="openshift-must-gather-whvdq/crc-debug-2v82w" Dec 01 16:05:58 crc kubenswrapper[4637]: I1201 16:05:58.335096 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb77f\" (UniqueName: \"kubernetes.io/projected/48f26f45-cc01-4b9c-ad90-ea8f7115eff7-kube-api-access-tb77f\") pod \"crc-debug-2v82w\" (UID: \"48f26f45-cc01-4b9c-ad90-ea8f7115eff7\") " pod="openshift-must-gather-whvdq/crc-debug-2v82w" Dec 01 16:05:58 crc kubenswrapper[4637]: I1201 16:05:58.437544 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb77f\" (UniqueName: \"kubernetes.io/projected/48f26f45-cc01-4b9c-ad90-ea8f7115eff7-kube-api-access-tb77f\") pod \"crc-debug-2v82w\" (UID: \"48f26f45-cc01-4b9c-ad90-ea8f7115eff7\") " pod="openshift-must-gather-whvdq/crc-debug-2v82w" Dec 01 16:05:58 crc kubenswrapper[4637]: I1201 16:05:58.437846 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/48f26f45-cc01-4b9c-ad90-ea8f7115eff7-host\") pod \"crc-debug-2v82w\" (UID: \"48f26f45-cc01-4b9c-ad90-ea8f7115eff7\") " pod="openshift-must-gather-whvdq/crc-debug-2v82w" Dec 01 16:05:58 crc kubenswrapper[4637]: I1201 16:05:58.437980 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/48f26f45-cc01-4b9c-ad90-ea8f7115eff7-host\") pod \"crc-debug-2v82w\" (UID: \"48f26f45-cc01-4b9c-ad90-ea8f7115eff7\") " pod="openshift-must-gather-whvdq/crc-debug-2v82w" Dec 01 16:05:58 crc 
kubenswrapper[4637]: I1201 16:05:58.468706 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb77f\" (UniqueName: \"kubernetes.io/projected/48f26f45-cc01-4b9c-ad90-ea8f7115eff7-kube-api-access-tb77f\") pod \"crc-debug-2v82w\" (UID: \"48f26f45-cc01-4b9c-ad90-ea8f7115eff7\") " pod="openshift-must-gather-whvdq/crc-debug-2v82w" Dec 01 16:05:58 crc kubenswrapper[4637]: I1201 16:05:58.630391 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-whvdq/crc-debug-2v82w" Dec 01 16:05:58 crc kubenswrapper[4637]: W1201 16:05:58.669243 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48f26f45_cc01_4b9c_ad90_ea8f7115eff7.slice/crio-60ab409f0fa76d8207266eb47f86531da98c181881f2172ca12a9b5385e42a4d WatchSource:0}: Error finding container 60ab409f0fa76d8207266eb47f86531da98c181881f2172ca12a9b5385e42a4d: Status 404 returned error can't find the container with id 60ab409f0fa76d8207266eb47f86531da98c181881f2172ca12a9b5385e42a4d Dec 01 16:05:59 crc kubenswrapper[4637]: I1201 16:05:59.741719 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-whvdq/crc-debug-2v82w" event={"ID":"48f26f45-cc01-4b9c-ad90-ea8f7115eff7","Type":"ContainerStarted","Data":"be618ba34b297a9879101ed3b4cda61e98b01032b98d3f2e90d846a3a16eb639"} Dec 01 16:05:59 crc kubenswrapper[4637]: I1201 16:05:59.742465 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-whvdq/crc-debug-2v82w" event={"ID":"48f26f45-cc01-4b9c-ad90-ea8f7115eff7","Type":"ContainerStarted","Data":"60ab409f0fa76d8207266eb47f86531da98c181881f2172ca12a9b5385e42a4d"} Dec 01 16:05:59 crc kubenswrapper[4637]: I1201 16:05:59.772302 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-whvdq/crc-debug-2v82w" podStartSLOduration=1.772281759 podStartE2EDuration="1.772281759s" 
podCreationTimestamp="2025-12-01 16:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 16:05:59.762289029 +0000 UTC m=+4810.279997847" watchObservedRunningTime="2025-12-01 16:05:59.772281759 +0000 UTC m=+4810.289990587" Dec 01 16:06:00 crc kubenswrapper[4637]: E1201 16:06:00.502810 4637 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.204:43294->38.102.83.204:38409: write tcp 38.102.83.204:43294->38.102.83.204:38409: write: broken pipe Dec 01 16:06:06 crc kubenswrapper[4637]: I1201 16:06:06.830129 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhcl7" event={"ID":"1a364742-5315-43af-9e16-66e4d39282f8","Type":"ContainerStarted","Data":"20aeab875c8ac004cf44aa3f8a1e63f409f345199b24a0f10c7270d9b85e42f7"} Dec 01 16:06:09 crc kubenswrapper[4637]: I1201 16:06:09.864861 4637 generic.go:334] "Generic (PLEG): container finished" podID="1a364742-5315-43af-9e16-66e4d39282f8" containerID="20aeab875c8ac004cf44aa3f8a1e63f409f345199b24a0f10c7270d9b85e42f7" exitCode=0 Dec 01 16:06:09 crc kubenswrapper[4637]: I1201 16:06:09.865041 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhcl7" event={"ID":"1a364742-5315-43af-9e16-66e4d39282f8","Type":"ContainerDied","Data":"20aeab875c8ac004cf44aa3f8a1e63f409f345199b24a0f10c7270d9b85e42f7"} Dec 01 16:06:10 crc kubenswrapper[4637]: I1201 16:06:10.877577 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhcl7" event={"ID":"1a364742-5315-43af-9e16-66e4d39282f8","Type":"ContainerStarted","Data":"2ba77e7dd9c8c1e80c8489e30dd19e4b0145544de6c13a62a82f302dd3b3eb76"} Dec 01 16:06:10 crc kubenswrapper[4637]: I1201 16:06:10.899066 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qhcl7" 
podStartSLOduration=1.894366148 podStartE2EDuration="17.899043279s" podCreationTimestamp="2025-12-01 16:05:53 +0000 UTC" firstStartedPulling="2025-12-01 16:05:54.634329855 +0000 UTC m=+4805.152038683" lastFinishedPulling="2025-12-01 16:06:10.639006986 +0000 UTC m=+4821.156715814" observedRunningTime="2025-12-01 16:06:10.893788847 +0000 UTC m=+4821.411497685" watchObservedRunningTime="2025-12-01 16:06:10.899043279 +0000 UTC m=+4821.416752117" Dec 01 16:06:13 crc kubenswrapper[4637]: I1201 16:06:13.757151 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qhcl7" Dec 01 16:06:13 crc kubenswrapper[4637]: I1201 16:06:13.758140 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qhcl7" Dec 01 16:06:14 crc kubenswrapper[4637]: I1201 16:06:14.819212 4637 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qhcl7" podUID="1a364742-5315-43af-9e16-66e4d39282f8" containerName="registry-server" probeResult="failure" output=< Dec 01 16:06:14 crc kubenswrapper[4637]: timeout: failed to connect service ":50051" within 1s Dec 01 16:06:14 crc kubenswrapper[4637]: > Dec 01 16:06:15 crc kubenswrapper[4637]: I1201 16:06:15.613870 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 16:06:15 crc kubenswrapper[4637]: I1201 16:06:15.614636 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 16:06:15 
crc kubenswrapper[4637]: I1201 16:06:15.614696 4637 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 16:06:15 crc kubenswrapper[4637]: I1201 16:06:15.615660 4637 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fb2f4d33bc8f71279f1adfc13c4437e8709d8e9307a5869b86c33a7ed64905ad"} pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 16:06:15 crc kubenswrapper[4637]: I1201 16:06:15.615732 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" containerID="cri-o://fb2f4d33bc8f71279f1adfc13c4437e8709d8e9307a5869b86c33a7ed64905ad" gracePeriod=600 Dec 01 16:06:15 crc kubenswrapper[4637]: I1201 16:06:15.928778 4637 generic.go:334] "Generic (PLEG): container finished" podID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerID="fb2f4d33bc8f71279f1adfc13c4437e8709d8e9307a5869b86c33a7ed64905ad" exitCode=0 Dec 01 16:06:15 crc kubenswrapper[4637]: I1201 16:06:15.929050 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerDied","Data":"fb2f4d33bc8f71279f1adfc13c4437e8709d8e9307a5869b86c33a7ed64905ad"} Dec 01 16:06:15 crc kubenswrapper[4637]: I1201 16:06:15.929170 4637 scope.go:117] "RemoveContainer" containerID="73e447d559e71d0a6c7e167abc4356ec442823f204c571681876329389388e2c" Dec 01 16:06:16 crc kubenswrapper[4637]: I1201 16:06:16.941241 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" 
event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerStarted","Data":"61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd"} Dec 01 16:06:23 crc kubenswrapper[4637]: I1201 16:06:23.823597 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qhcl7" Dec 01 16:06:23 crc kubenswrapper[4637]: I1201 16:06:23.894325 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qhcl7" Dec 01 16:06:24 crc kubenswrapper[4637]: I1201 16:06:24.424403 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qhcl7"] Dec 01 16:06:24 crc kubenswrapper[4637]: I1201 16:06:24.602240 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k9spm"] Dec 01 16:06:24 crc kubenswrapper[4637]: I1201 16:06:24.602515 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k9spm" podUID="23d95adc-2953-47f5-bf61-15b5eb73fe52" containerName="registry-server" containerID="cri-o://cbeafdbb93fabc63743b577b21764dc0da5558e607fa2d570c4228b23500e0ec" gracePeriod=2 Dec 01 16:06:25 crc kubenswrapper[4637]: I1201 16:06:25.053566 4637 generic.go:334] "Generic (PLEG): container finished" podID="23d95adc-2953-47f5-bf61-15b5eb73fe52" containerID="cbeafdbb93fabc63743b577b21764dc0da5558e607fa2d570c4228b23500e0ec" exitCode=0 Dec 01 16:06:25 crc kubenswrapper[4637]: I1201 16:06:25.053663 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9spm" event={"ID":"23d95adc-2953-47f5-bf61-15b5eb73fe52","Type":"ContainerDied","Data":"cbeafdbb93fabc63743b577b21764dc0da5558e607fa2d570c4228b23500e0ec"} Dec 01 16:06:25 crc kubenswrapper[4637]: I1201 16:06:25.202519 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k9spm" Dec 01 16:06:25 crc kubenswrapper[4637]: I1201 16:06:25.367242 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23d95adc-2953-47f5-bf61-15b5eb73fe52-catalog-content\") pod \"23d95adc-2953-47f5-bf61-15b5eb73fe52\" (UID: \"23d95adc-2953-47f5-bf61-15b5eb73fe52\") " Dec 01 16:06:25 crc kubenswrapper[4637]: I1201 16:06:25.367295 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phf4q\" (UniqueName: \"kubernetes.io/projected/23d95adc-2953-47f5-bf61-15b5eb73fe52-kube-api-access-phf4q\") pod \"23d95adc-2953-47f5-bf61-15b5eb73fe52\" (UID: \"23d95adc-2953-47f5-bf61-15b5eb73fe52\") " Dec 01 16:06:25 crc kubenswrapper[4637]: I1201 16:06:25.367531 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23d95adc-2953-47f5-bf61-15b5eb73fe52-utilities\") pod \"23d95adc-2953-47f5-bf61-15b5eb73fe52\" (UID: \"23d95adc-2953-47f5-bf61-15b5eb73fe52\") " Dec 01 16:06:25 crc kubenswrapper[4637]: I1201 16:06:25.368110 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23d95adc-2953-47f5-bf61-15b5eb73fe52-utilities" (OuterVolumeSpecName: "utilities") pod "23d95adc-2953-47f5-bf61-15b5eb73fe52" (UID: "23d95adc-2953-47f5-bf61-15b5eb73fe52"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 16:06:25 crc kubenswrapper[4637]: I1201 16:06:25.385438 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23d95adc-2953-47f5-bf61-15b5eb73fe52-kube-api-access-phf4q" (OuterVolumeSpecName: "kube-api-access-phf4q") pod "23d95adc-2953-47f5-bf61-15b5eb73fe52" (UID: "23d95adc-2953-47f5-bf61-15b5eb73fe52"). InnerVolumeSpecName "kube-api-access-phf4q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 16:06:25 crc kubenswrapper[4637]: I1201 16:06:25.470737 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phf4q\" (UniqueName: \"kubernetes.io/projected/23d95adc-2953-47f5-bf61-15b5eb73fe52-kube-api-access-phf4q\") on node \"crc\" DevicePath \"\"" Dec 01 16:06:25 crc kubenswrapper[4637]: I1201 16:06:25.470772 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23d95adc-2953-47f5-bf61-15b5eb73fe52-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 16:06:25 crc kubenswrapper[4637]: I1201 16:06:25.521184 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23d95adc-2953-47f5-bf61-15b5eb73fe52-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23d95adc-2953-47f5-bf61-15b5eb73fe52" (UID: "23d95adc-2953-47f5-bf61-15b5eb73fe52"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 16:06:25 crc kubenswrapper[4637]: I1201 16:06:25.571999 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23d95adc-2953-47f5-bf61-15b5eb73fe52-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 16:06:26 crc kubenswrapper[4637]: I1201 16:06:26.064669 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9spm" event={"ID":"23d95adc-2953-47f5-bf61-15b5eb73fe52","Type":"ContainerDied","Data":"869f4ac65fd617d1de4e0577073cf6fa3358df6d7eb3ce52d8183c7ac33f2694"} Dec 01 16:06:26 crc kubenswrapper[4637]: I1201 16:06:26.065191 4637 scope.go:117] "RemoveContainer" containerID="cbeafdbb93fabc63743b577b21764dc0da5558e607fa2d570c4228b23500e0ec" Dec 01 16:06:26 crc kubenswrapper[4637]: I1201 16:06:26.064719 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k9spm" Dec 01 16:06:26 crc kubenswrapper[4637]: I1201 16:06:26.089195 4637 scope.go:117] "RemoveContainer" containerID="98158f7dd394c95667aa76efb73a106f36f9f328036e18b94c65a6a288d4f25e" Dec 01 16:06:26 crc kubenswrapper[4637]: I1201 16:06:26.114109 4637 scope.go:117] "RemoveContainer" containerID="526aa2b7611fd87ae19098055280b07b32075faa1d70c9d3d6242551d37762ae" Dec 01 16:06:26 crc kubenswrapper[4637]: I1201 16:06:26.120897 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k9spm"] Dec 01 16:06:26 crc kubenswrapper[4637]: I1201 16:06:26.150769 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k9spm"] Dec 01 16:06:27 crc kubenswrapper[4637]: I1201 16:06:27.785968 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23d95adc-2953-47f5-bf61-15b5eb73fe52" path="/var/lib/kubelet/pods/23d95adc-2953-47f5-bf61-15b5eb73fe52/volumes" Dec 01 16:06:56 crc kubenswrapper[4637]: I1201 16:06:56.338318 4637 generic.go:334] "Generic (PLEG): container finished" podID="48f26f45-cc01-4b9c-ad90-ea8f7115eff7" containerID="be618ba34b297a9879101ed3b4cda61e98b01032b98d3f2e90d846a3a16eb639" exitCode=0 Dec 01 16:06:56 crc kubenswrapper[4637]: I1201 16:06:56.338395 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-whvdq/crc-debug-2v82w" event={"ID":"48f26f45-cc01-4b9c-ad90-ea8f7115eff7","Type":"ContainerDied","Data":"be618ba34b297a9879101ed3b4cda61e98b01032b98d3f2e90d846a3a16eb639"} Dec 01 16:06:57 crc kubenswrapper[4637]: I1201 16:06:57.484190 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-whvdq/crc-debug-2v82w" Dec 01 16:06:57 crc kubenswrapper[4637]: I1201 16:06:57.520669 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-whvdq/crc-debug-2v82w"] Dec 01 16:06:57 crc kubenswrapper[4637]: I1201 16:06:57.539760 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-whvdq/crc-debug-2v82w"] Dec 01 16:06:57 crc kubenswrapper[4637]: I1201 16:06:57.629505 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb77f\" (UniqueName: \"kubernetes.io/projected/48f26f45-cc01-4b9c-ad90-ea8f7115eff7-kube-api-access-tb77f\") pod \"48f26f45-cc01-4b9c-ad90-ea8f7115eff7\" (UID: \"48f26f45-cc01-4b9c-ad90-ea8f7115eff7\") " Dec 01 16:06:57 crc kubenswrapper[4637]: I1201 16:06:57.630267 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/48f26f45-cc01-4b9c-ad90-ea8f7115eff7-host\") pod \"48f26f45-cc01-4b9c-ad90-ea8f7115eff7\" (UID: \"48f26f45-cc01-4b9c-ad90-ea8f7115eff7\") " Dec 01 16:06:57 crc kubenswrapper[4637]: I1201 16:06:57.630339 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48f26f45-cc01-4b9c-ad90-ea8f7115eff7-host" (OuterVolumeSpecName: "host") pod "48f26f45-cc01-4b9c-ad90-ea8f7115eff7" (UID: "48f26f45-cc01-4b9c-ad90-ea8f7115eff7"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 16:06:57 crc kubenswrapper[4637]: I1201 16:06:57.630808 4637 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/48f26f45-cc01-4b9c-ad90-ea8f7115eff7-host\") on node \"crc\" DevicePath \"\"" Dec 01 16:06:57 crc kubenswrapper[4637]: I1201 16:06:57.635805 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48f26f45-cc01-4b9c-ad90-ea8f7115eff7-kube-api-access-tb77f" (OuterVolumeSpecName: "kube-api-access-tb77f") pod "48f26f45-cc01-4b9c-ad90-ea8f7115eff7" (UID: "48f26f45-cc01-4b9c-ad90-ea8f7115eff7"). InnerVolumeSpecName "kube-api-access-tb77f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 16:06:57 crc kubenswrapper[4637]: I1201 16:06:57.733171 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb77f\" (UniqueName: \"kubernetes.io/projected/48f26f45-cc01-4b9c-ad90-ea8f7115eff7-kube-api-access-tb77f\") on node \"crc\" DevicePath \"\"" Dec 01 16:06:57 crc kubenswrapper[4637]: I1201 16:06:57.782264 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48f26f45-cc01-4b9c-ad90-ea8f7115eff7" path="/var/lib/kubelet/pods/48f26f45-cc01-4b9c-ad90-ea8f7115eff7/volumes" Dec 01 16:06:58 crc kubenswrapper[4637]: E1201 16:06:58.035610 4637 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48f26f45_cc01_4b9c_ad90_ea8f7115eff7.slice\": RecentStats: unable to find data in memory cache]" Dec 01 16:06:58 crc kubenswrapper[4637]: I1201 16:06:58.359970 4637 scope.go:117] "RemoveContainer" containerID="be618ba34b297a9879101ed3b4cda61e98b01032b98d3f2e90d846a3a16eb639" Dec 01 16:06:58 crc kubenswrapper[4637]: I1201 16:06:58.360253 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-whvdq/crc-debug-2v82w" Dec 01 16:06:58 crc kubenswrapper[4637]: I1201 16:06:58.993184 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-whvdq/crc-debug-6lzl6"] Dec 01 16:06:58 crc kubenswrapper[4637]: E1201 16:06:58.993630 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23d95adc-2953-47f5-bf61-15b5eb73fe52" containerName="extract-utilities" Dec 01 16:06:58 crc kubenswrapper[4637]: I1201 16:06:58.993644 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="23d95adc-2953-47f5-bf61-15b5eb73fe52" containerName="extract-utilities" Dec 01 16:06:58 crc kubenswrapper[4637]: E1201 16:06:58.993666 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48f26f45-cc01-4b9c-ad90-ea8f7115eff7" containerName="container-00" Dec 01 16:06:58 crc kubenswrapper[4637]: I1201 16:06:58.993674 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="48f26f45-cc01-4b9c-ad90-ea8f7115eff7" containerName="container-00" Dec 01 16:06:58 crc kubenswrapper[4637]: E1201 16:06:58.993701 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23d95adc-2953-47f5-bf61-15b5eb73fe52" containerName="registry-server" Dec 01 16:06:58 crc kubenswrapper[4637]: I1201 16:06:58.993707 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="23d95adc-2953-47f5-bf61-15b5eb73fe52" containerName="registry-server" Dec 01 16:06:58 crc kubenswrapper[4637]: E1201 16:06:58.993728 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23d95adc-2953-47f5-bf61-15b5eb73fe52" containerName="extract-content" Dec 01 16:06:58 crc kubenswrapper[4637]: I1201 16:06:58.993734 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="23d95adc-2953-47f5-bf61-15b5eb73fe52" containerName="extract-content" Dec 01 16:06:58 crc kubenswrapper[4637]: I1201 16:06:58.996636 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="48f26f45-cc01-4b9c-ad90-ea8f7115eff7" 
containerName="container-00" Dec 01 16:06:58 crc kubenswrapper[4637]: I1201 16:06:58.996680 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="23d95adc-2953-47f5-bf61-15b5eb73fe52" containerName="registry-server" Dec 01 16:06:58 crc kubenswrapper[4637]: I1201 16:06:58.997390 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-whvdq/crc-debug-6lzl6" Dec 01 16:06:59 crc kubenswrapper[4637]: I1201 16:06:59.167975 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62g4d\" (UniqueName: \"kubernetes.io/projected/656a457a-3b63-4247-b154-9e3cc8f786a3-kube-api-access-62g4d\") pod \"crc-debug-6lzl6\" (UID: \"656a457a-3b63-4247-b154-9e3cc8f786a3\") " pod="openshift-must-gather-whvdq/crc-debug-6lzl6" Dec 01 16:06:59 crc kubenswrapper[4637]: I1201 16:06:59.168037 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/656a457a-3b63-4247-b154-9e3cc8f786a3-host\") pod \"crc-debug-6lzl6\" (UID: \"656a457a-3b63-4247-b154-9e3cc8f786a3\") " pod="openshift-must-gather-whvdq/crc-debug-6lzl6" Dec 01 16:06:59 crc kubenswrapper[4637]: I1201 16:06:59.270025 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62g4d\" (UniqueName: \"kubernetes.io/projected/656a457a-3b63-4247-b154-9e3cc8f786a3-kube-api-access-62g4d\") pod \"crc-debug-6lzl6\" (UID: \"656a457a-3b63-4247-b154-9e3cc8f786a3\") " pod="openshift-must-gather-whvdq/crc-debug-6lzl6" Dec 01 16:06:59 crc kubenswrapper[4637]: I1201 16:06:59.270106 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/656a457a-3b63-4247-b154-9e3cc8f786a3-host\") pod \"crc-debug-6lzl6\" (UID: \"656a457a-3b63-4247-b154-9e3cc8f786a3\") " pod="openshift-must-gather-whvdq/crc-debug-6lzl6" Dec 01 16:06:59 crc 
kubenswrapper[4637]: I1201 16:06:59.270307 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/656a457a-3b63-4247-b154-9e3cc8f786a3-host\") pod \"crc-debug-6lzl6\" (UID: \"656a457a-3b63-4247-b154-9e3cc8f786a3\") " pod="openshift-must-gather-whvdq/crc-debug-6lzl6" Dec 01 16:06:59 crc kubenswrapper[4637]: I1201 16:06:59.300759 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62g4d\" (UniqueName: \"kubernetes.io/projected/656a457a-3b63-4247-b154-9e3cc8f786a3-kube-api-access-62g4d\") pod \"crc-debug-6lzl6\" (UID: \"656a457a-3b63-4247-b154-9e3cc8f786a3\") " pod="openshift-must-gather-whvdq/crc-debug-6lzl6" Dec 01 16:06:59 crc kubenswrapper[4637]: I1201 16:06:59.314741 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-whvdq/crc-debug-6lzl6" Dec 01 16:06:59 crc kubenswrapper[4637]: W1201 16:06:59.362442 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod656a457a_3b63_4247_b154_9e3cc8f786a3.slice/crio-d2eebfc20d9965a79570af8bc510ba1abe9f1169d745b77fe5e81a8654f30fc6 WatchSource:0}: Error finding container d2eebfc20d9965a79570af8bc510ba1abe9f1169d745b77fe5e81a8654f30fc6: Status 404 returned error can't find the container with id d2eebfc20d9965a79570af8bc510ba1abe9f1169d745b77fe5e81a8654f30fc6 Dec 01 16:07:00 crc kubenswrapper[4637]: I1201 16:07:00.381401 4637 generic.go:334] "Generic (PLEG): container finished" podID="656a457a-3b63-4247-b154-9e3cc8f786a3" containerID="7b643ebed751a6122230bda030df19d8d3775968cf5785f413cbe2f97a41644e" exitCode=0 Dec 01 16:07:00 crc kubenswrapper[4637]: I1201 16:07:00.381476 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-whvdq/crc-debug-6lzl6" 
event={"ID":"656a457a-3b63-4247-b154-9e3cc8f786a3","Type":"ContainerDied","Data":"7b643ebed751a6122230bda030df19d8d3775968cf5785f413cbe2f97a41644e"} Dec 01 16:07:00 crc kubenswrapper[4637]: I1201 16:07:00.381863 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-whvdq/crc-debug-6lzl6" event={"ID":"656a457a-3b63-4247-b154-9e3cc8f786a3","Type":"ContainerStarted","Data":"d2eebfc20d9965a79570af8bc510ba1abe9f1169d745b77fe5e81a8654f30fc6"} Dec 01 16:07:01 crc kubenswrapper[4637]: I1201 16:07:01.506515 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-whvdq/crc-debug-6lzl6" Dec 01 16:07:01 crc kubenswrapper[4637]: I1201 16:07:01.610603 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/656a457a-3b63-4247-b154-9e3cc8f786a3-host\") pod \"656a457a-3b63-4247-b154-9e3cc8f786a3\" (UID: \"656a457a-3b63-4247-b154-9e3cc8f786a3\") " Dec 01 16:07:01 crc kubenswrapper[4637]: I1201 16:07:01.610696 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62g4d\" (UniqueName: \"kubernetes.io/projected/656a457a-3b63-4247-b154-9e3cc8f786a3-kube-api-access-62g4d\") pod \"656a457a-3b63-4247-b154-9e3cc8f786a3\" (UID: \"656a457a-3b63-4247-b154-9e3cc8f786a3\") " Dec 01 16:07:01 crc kubenswrapper[4637]: I1201 16:07:01.611107 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/656a457a-3b63-4247-b154-9e3cc8f786a3-host" (OuterVolumeSpecName: "host") pod "656a457a-3b63-4247-b154-9e3cc8f786a3" (UID: "656a457a-3b63-4247-b154-9e3cc8f786a3"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 16:07:01 crc kubenswrapper[4637]: I1201 16:07:01.631175 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/656a457a-3b63-4247-b154-9e3cc8f786a3-kube-api-access-62g4d" (OuterVolumeSpecName: "kube-api-access-62g4d") pod "656a457a-3b63-4247-b154-9e3cc8f786a3" (UID: "656a457a-3b63-4247-b154-9e3cc8f786a3"). InnerVolumeSpecName "kube-api-access-62g4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 16:07:01 crc kubenswrapper[4637]: I1201 16:07:01.713752 4637 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/656a457a-3b63-4247-b154-9e3cc8f786a3-host\") on node \"crc\" DevicePath \"\"" Dec 01 16:07:01 crc kubenswrapper[4637]: I1201 16:07:01.713787 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62g4d\" (UniqueName: \"kubernetes.io/projected/656a457a-3b63-4247-b154-9e3cc8f786a3-kube-api-access-62g4d\") on node \"crc\" DevicePath \"\"" Dec 01 16:07:02 crc kubenswrapper[4637]: I1201 16:07:02.430096 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-whvdq/crc-debug-6lzl6" event={"ID":"656a457a-3b63-4247-b154-9e3cc8f786a3","Type":"ContainerDied","Data":"d2eebfc20d9965a79570af8bc510ba1abe9f1169d745b77fe5e81a8654f30fc6"} Dec 01 16:07:02 crc kubenswrapper[4637]: I1201 16:07:02.431607 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2eebfc20d9965a79570af8bc510ba1abe9f1169d745b77fe5e81a8654f30fc6" Dec 01 16:07:02 crc kubenswrapper[4637]: I1201 16:07:02.430519 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-whvdq/crc-debug-6lzl6" Dec 01 16:07:02 crc kubenswrapper[4637]: I1201 16:07:02.608112 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-whvdq/crc-debug-6lzl6"] Dec 01 16:07:02 crc kubenswrapper[4637]: I1201 16:07:02.617740 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-whvdq/crc-debug-6lzl6"] Dec 01 16:07:03 crc kubenswrapper[4637]: I1201 16:07:03.782421 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="656a457a-3b63-4247-b154-9e3cc8f786a3" path="/var/lib/kubelet/pods/656a457a-3b63-4247-b154-9e3cc8f786a3/volumes" Dec 01 16:07:03 crc kubenswrapper[4637]: I1201 16:07:03.799410 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-whvdq/crc-debug-jgg65"] Dec 01 16:07:03 crc kubenswrapper[4637]: E1201 16:07:03.799833 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="656a457a-3b63-4247-b154-9e3cc8f786a3" containerName="container-00" Dec 01 16:07:03 crc kubenswrapper[4637]: I1201 16:07:03.799852 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="656a457a-3b63-4247-b154-9e3cc8f786a3" containerName="container-00" Dec 01 16:07:03 crc kubenswrapper[4637]: I1201 16:07:03.800098 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="656a457a-3b63-4247-b154-9e3cc8f786a3" containerName="container-00" Dec 01 16:07:03 crc kubenswrapper[4637]: I1201 16:07:03.800720 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-whvdq/crc-debug-jgg65" Dec 01 16:07:03 crc kubenswrapper[4637]: I1201 16:07:03.866335 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d2b7115c-09e1-4731-9f77-1d1756a0ceb3-host\") pod \"crc-debug-jgg65\" (UID: \"d2b7115c-09e1-4731-9f77-1d1756a0ceb3\") " pod="openshift-must-gather-whvdq/crc-debug-jgg65" Dec 01 16:07:03 crc kubenswrapper[4637]: I1201 16:07:03.866409 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp8w2\" (UniqueName: \"kubernetes.io/projected/d2b7115c-09e1-4731-9f77-1d1756a0ceb3-kube-api-access-lp8w2\") pod \"crc-debug-jgg65\" (UID: \"d2b7115c-09e1-4731-9f77-1d1756a0ceb3\") " pod="openshift-must-gather-whvdq/crc-debug-jgg65" Dec 01 16:07:03 crc kubenswrapper[4637]: I1201 16:07:03.967994 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d2b7115c-09e1-4731-9f77-1d1756a0ceb3-host\") pod \"crc-debug-jgg65\" (UID: \"d2b7115c-09e1-4731-9f77-1d1756a0ceb3\") " pod="openshift-must-gather-whvdq/crc-debug-jgg65" Dec 01 16:07:03 crc kubenswrapper[4637]: I1201 16:07:03.968081 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp8w2\" (UniqueName: \"kubernetes.io/projected/d2b7115c-09e1-4731-9f77-1d1756a0ceb3-kube-api-access-lp8w2\") pod \"crc-debug-jgg65\" (UID: \"d2b7115c-09e1-4731-9f77-1d1756a0ceb3\") " pod="openshift-must-gather-whvdq/crc-debug-jgg65" Dec 01 16:07:03 crc kubenswrapper[4637]: I1201 16:07:03.968156 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d2b7115c-09e1-4731-9f77-1d1756a0ceb3-host\") pod \"crc-debug-jgg65\" (UID: \"d2b7115c-09e1-4731-9f77-1d1756a0ceb3\") " pod="openshift-must-gather-whvdq/crc-debug-jgg65" Dec 01 16:07:03 crc 
kubenswrapper[4637]: I1201 16:07:03.988495 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp8w2\" (UniqueName: \"kubernetes.io/projected/d2b7115c-09e1-4731-9f77-1d1756a0ceb3-kube-api-access-lp8w2\") pod \"crc-debug-jgg65\" (UID: \"d2b7115c-09e1-4731-9f77-1d1756a0ceb3\") " pod="openshift-must-gather-whvdq/crc-debug-jgg65" Dec 01 16:07:04 crc kubenswrapper[4637]: I1201 16:07:04.120107 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-whvdq/crc-debug-jgg65" Dec 01 16:07:04 crc kubenswrapper[4637]: W1201 16:07:04.148645 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2b7115c_09e1_4731_9f77_1d1756a0ceb3.slice/crio-788e8492ef715f308841cec3fdc1af6fb3e2d18f11a68691cfbf434179372ed0 WatchSource:0}: Error finding container 788e8492ef715f308841cec3fdc1af6fb3e2d18f11a68691cfbf434179372ed0: Status 404 returned error can't find the container with id 788e8492ef715f308841cec3fdc1af6fb3e2d18f11a68691cfbf434179372ed0 Dec 01 16:07:04 crc kubenswrapper[4637]: I1201 16:07:04.450706 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-whvdq/crc-debug-jgg65" event={"ID":"d2b7115c-09e1-4731-9f77-1d1756a0ceb3","Type":"ContainerStarted","Data":"261d7b22e97363f6940daaab35b507c4e48d36618a5bda4015dc43c7ee136514"} Dec 01 16:07:04 crc kubenswrapper[4637]: I1201 16:07:04.451216 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-whvdq/crc-debug-jgg65" event={"ID":"d2b7115c-09e1-4731-9f77-1d1756a0ceb3","Type":"ContainerStarted","Data":"788e8492ef715f308841cec3fdc1af6fb3e2d18f11a68691cfbf434179372ed0"} Dec 01 16:07:04 crc kubenswrapper[4637]: I1201 16:07:04.469629 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-whvdq/crc-debug-jgg65" podStartSLOduration=1.4696003389999999 podStartE2EDuration="1.469600339s" 
podCreationTimestamp="2025-12-01 16:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 16:07:04.466579128 +0000 UTC m=+4874.984287956" watchObservedRunningTime="2025-12-01 16:07:04.469600339 +0000 UTC m=+4874.987309167" Dec 01 16:07:05 crc kubenswrapper[4637]: I1201 16:07:05.460106 4637 generic.go:334] "Generic (PLEG): container finished" podID="d2b7115c-09e1-4731-9f77-1d1756a0ceb3" containerID="261d7b22e97363f6940daaab35b507c4e48d36618a5bda4015dc43c7ee136514" exitCode=0 Dec 01 16:07:05 crc kubenswrapper[4637]: I1201 16:07:05.460184 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-whvdq/crc-debug-jgg65" event={"ID":"d2b7115c-09e1-4731-9f77-1d1756a0ceb3","Type":"ContainerDied","Data":"261d7b22e97363f6940daaab35b507c4e48d36618a5bda4015dc43c7ee136514"} Dec 01 16:07:06 crc kubenswrapper[4637]: I1201 16:07:06.567600 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-whvdq/crc-debug-jgg65" Dec 01 16:07:06 crc kubenswrapper[4637]: I1201 16:07:06.608864 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-whvdq/crc-debug-jgg65"] Dec 01 16:07:06 crc kubenswrapper[4637]: I1201 16:07:06.619305 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-whvdq/crc-debug-jgg65"] Dec 01 16:07:06 crc kubenswrapper[4637]: I1201 16:07:06.629266 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp8w2\" (UniqueName: \"kubernetes.io/projected/d2b7115c-09e1-4731-9f77-1d1756a0ceb3-kube-api-access-lp8w2\") pod \"d2b7115c-09e1-4731-9f77-1d1756a0ceb3\" (UID: \"d2b7115c-09e1-4731-9f77-1d1756a0ceb3\") " Dec 01 16:07:06 crc kubenswrapper[4637]: I1201 16:07:06.629558 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d2b7115c-09e1-4731-9f77-1d1756a0ceb3-host\") pod \"d2b7115c-09e1-4731-9f77-1d1756a0ceb3\" (UID: \"d2b7115c-09e1-4731-9f77-1d1756a0ceb3\") " Dec 01 16:07:06 crc kubenswrapper[4637]: I1201 16:07:06.629695 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2b7115c-09e1-4731-9f77-1d1756a0ceb3-host" (OuterVolumeSpecName: "host") pod "d2b7115c-09e1-4731-9f77-1d1756a0ceb3" (UID: "d2b7115c-09e1-4731-9f77-1d1756a0ceb3"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 16:07:06 crc kubenswrapper[4637]: I1201 16:07:06.630308 4637 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d2b7115c-09e1-4731-9f77-1d1756a0ceb3-host\") on node \"crc\" DevicePath \"\"" Dec 01 16:07:06 crc kubenswrapper[4637]: I1201 16:07:06.637117 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2b7115c-09e1-4731-9f77-1d1756a0ceb3-kube-api-access-lp8w2" (OuterVolumeSpecName: "kube-api-access-lp8w2") pod "d2b7115c-09e1-4731-9f77-1d1756a0ceb3" (UID: "d2b7115c-09e1-4731-9f77-1d1756a0ceb3"). InnerVolumeSpecName "kube-api-access-lp8w2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 16:07:06 crc kubenswrapper[4637]: I1201 16:07:06.732238 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp8w2\" (UniqueName: \"kubernetes.io/projected/d2b7115c-09e1-4731-9f77-1d1756a0ceb3-kube-api-access-lp8w2\") on node \"crc\" DevicePath \"\"" Dec 01 16:07:07 crc kubenswrapper[4637]: I1201 16:07:07.478305 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="788e8492ef715f308841cec3fdc1af6fb3e2d18f11a68691cfbf434179372ed0" Dec 01 16:07:07 crc kubenswrapper[4637]: I1201 16:07:07.478390 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-whvdq/crc-debug-jgg65" Dec 01 16:07:07 crc kubenswrapper[4637]: I1201 16:07:07.782963 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2b7115c-09e1-4731-9f77-1d1756a0ceb3" path="/var/lib/kubelet/pods/d2b7115c-09e1-4731-9f77-1d1756a0ceb3/volumes" Dec 01 16:07:41 crc kubenswrapper[4637]: I1201 16:07:41.761846 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7c95959774-tk5fr_57082a3e-c5e1-4926-a5b1-306d0becae0c/barbican-api/0.log" Dec 01 16:07:41 crc kubenswrapper[4637]: I1201 16:07:41.869298 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7c95959774-tk5fr_57082a3e-c5e1-4926-a5b1-306d0becae0c/barbican-api-log/0.log" Dec 01 16:07:42 crc kubenswrapper[4637]: I1201 16:07:42.094783 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6fcc69568b-hmqt6_4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04/barbican-keystone-listener-log/0.log" Dec 01 16:07:42 crc kubenswrapper[4637]: I1201 16:07:42.144395 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6fcc69568b-hmqt6_4e5bbcc3-d95d-433c-b7c3-769ba1a9ca04/barbican-keystone-listener/0.log" Dec 01 16:07:42 crc kubenswrapper[4637]: I1201 16:07:42.154904 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7d5b77c96f-nz2mk_395c12b6-6b37-4ed6-93fb-65937fa99e65/barbican-worker/0.log" Dec 01 16:07:42 crc kubenswrapper[4637]: I1201 16:07:42.350145 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7d5b77c96f-nz2mk_395c12b6-6b37-4ed6-93fb-65937fa99e65/barbican-worker-log/0.log" Dec 01 16:07:42 crc kubenswrapper[4637]: I1201 16:07:42.434396 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-dlw8r_290aad22-6654-4895-ae47-8651471b42e6/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 16:07:42 crc kubenswrapper[4637]: I1201 16:07:42.677864 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cd466a3c-d503-4718-a059-1cba9c618b07/ceilometer-central-agent/0.log" Dec 01 16:07:42 crc kubenswrapper[4637]: I1201 16:07:42.721260 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cd466a3c-d503-4718-a059-1cba9c618b07/proxy-httpd/0.log" Dec 01 16:07:42 crc kubenswrapper[4637]: I1201 16:07:42.745591 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cd466a3c-d503-4718-a059-1cba9c618b07/ceilometer-notification-agent/0.log" Dec 01 16:07:42 crc kubenswrapper[4637]: I1201 16:07:42.786952 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cd466a3c-d503-4718-a059-1cba9c618b07/sg-core/0.log" Dec 01 16:07:43 crc kubenswrapper[4637]: I1201 16:07:43.063419 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8a976758-2d0a-43be-ad2f-69b00b2fec4a/cinder-api-log/0.log" Dec 01 16:07:43 crc kubenswrapper[4637]: I1201 16:07:43.119130 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8a976758-2d0a-43be-ad2f-69b00b2fec4a/cinder-api/0.log" Dec 01 16:07:43 crc kubenswrapper[4637]: I1201 16:07:43.344600 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a03f4cde-ebc1-46dd-9218-b3f073602fba/probe/0.log" Dec 01 16:07:43 crc kubenswrapper[4637]: I1201 16:07:43.435718 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a03f4cde-ebc1-46dd-9218-b3f073602fba/cinder-scheduler/0.log" Dec 01 16:07:43 crc kubenswrapper[4637]: I1201 16:07:43.459706 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-khhg5_8152b193-b04b-4380-9596-60c61cc82ef7/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 16:07:43 crc kubenswrapper[4637]: I1201 16:07:43.721637 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-prh4q_d76c43c1-7a6f-41c6-b052-5363182c236c/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 16:07:43 crc kubenswrapper[4637]: I1201 16:07:43.933904 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55fb7f8d4c-mcw8w_0e879137-dbe4-4b26-a4bc-21cd963dc5e9/init/0.log" Dec 01 16:07:44 crc kubenswrapper[4637]: I1201 16:07:44.083597 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55fb7f8d4c-mcw8w_0e879137-dbe4-4b26-a4bc-21cd963dc5e9/init/0.log" Dec 01 16:07:44 crc kubenswrapper[4637]: I1201 16:07:44.360500 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55fb7f8d4c-mcw8w_0e879137-dbe4-4b26-a4bc-21cd963dc5e9/dnsmasq-dns/0.log" Dec 01 16:07:44 crc kubenswrapper[4637]: I1201 16:07:44.376183 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-ph98p_0103da10-320d-4303-8498-e0f06d9e97f4/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 16:07:44 crc kubenswrapper[4637]: I1201 16:07:44.622618 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_11dea63e-843a-4a51-9525-4cda961c167a/glance-log/0.log" Dec 01 16:07:45 crc kubenswrapper[4637]: I1201 16:07:45.058403 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_00582d1a-8f52-49ad-9adc-306f07c46255/glance-httpd/0.log" Dec 01 16:07:45 crc kubenswrapper[4637]: I1201 16:07:45.081873 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_11dea63e-843a-4a51-9525-4cda961c167a/glance-httpd/0.log" Dec 01 16:07:45 crc kubenswrapper[4637]: I1201 16:07:45.205913 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_00582d1a-8f52-49ad-9adc-306f07c46255/glance-log/0.log" Dec 01 16:07:45 crc kubenswrapper[4637]: I1201 16:07:45.652893 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-fcb665488-kvv69_269bc165-8fbc-4c63-84ef-96b74d44fc16/horizon/0.log" Dec 01 16:07:45 crc kubenswrapper[4637]: I1201 16:07:45.942773 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-s548v_1e434aff-123b-42e2-8c40-c82c0bd5aabe/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 16:07:46 crc kubenswrapper[4637]: I1201 16:07:46.040362 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-jmj6p_4144f6ad-f95f-4e2e-a9d3-003cdc5ef439/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 16:07:46 crc kubenswrapper[4637]: I1201 16:07:46.118130 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-fcb665488-kvv69_269bc165-8fbc-4c63-84ef-96b74d44fc16/horizon-log/0.log" Dec 01 16:07:46 crc kubenswrapper[4637]: I1201 16:07:46.409451 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29410081-7h8rk_d33387a1-c97a-4279-8b82-e50d32e48b4f/keystone-cron/0.log" Dec 01 16:07:46 crc kubenswrapper[4637]: I1201 16:07:46.608077 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_0062383a-47a6-4c14-bfeb-0ea63ac93305/kube-state-metrics/0.log" Dec 01 16:07:46 crc kubenswrapper[4637]: I1201 16:07:46.831208 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-6f896d59db-mf67s_0e2b0a1d-1624-43e9-8f38-9918fa4b0b85/keystone-api/0.log" Dec 01 16:07:46 crc kubenswrapper[4637]: I1201 16:07:46.868581 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-g5cqp_6a16b3a0-82a0-4cc6-820a-6c084408566f/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 16:07:47 crc kubenswrapper[4637]: I1201 16:07:47.867733 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-bpb8m_01f54aa2-e74c-40e5-a386-da5ea69b918c/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 16:07:48 crc kubenswrapper[4637]: I1201 16:07:48.017068 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-ff56c879c-9gwf6_391b6ea5-6446-4755-9075-904efff48769/neutron-httpd/0.log" Dec 01 16:07:48 crc kubenswrapper[4637]: I1201 16:07:48.321143 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-ff56c879c-9gwf6_391b6ea5-6446-4755-9075-904efff48769/neutron-api/0.log" Dec 01 16:07:49 crc kubenswrapper[4637]: I1201 16:07:49.135116 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_1ca73521-4fd3-4ff2-b490-7e52488a96e4/nova-cell0-conductor-conductor/0.log" Dec 01 16:07:49 crc kubenswrapper[4637]: I1201 16:07:49.498337 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_47cc23aa-328f-4714-8407-cb7e62fa05db/nova-cell1-conductor-conductor/0.log" Dec 01 16:07:49 crc kubenswrapper[4637]: I1201 16:07:49.893686 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_1f8bf954-7268-4bcf-b75b-d4d4bfa26e11/nova-api-log/0.log" Dec 01 16:07:50 crc kubenswrapper[4637]: I1201 16:07:50.032558 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-novncproxy-0_cd4d95fc-d5be-40c9-bfae-3e1afaa2722d/nova-cell1-novncproxy-novncproxy/0.log" Dec 01 16:07:50 crc kubenswrapper[4637]: I1201 16:07:50.233385 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_1f8bf954-7268-4bcf-b75b-d4d4bfa26e11/nova-api-api/0.log" Dec 01 16:07:50 crc kubenswrapper[4637]: I1201 16:07:50.277480 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-md4m5_30d902b2-5e9c-4431-a436-03edbc23458d/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 16:07:50 crc kubenswrapper[4637]: I1201 16:07:50.433619 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0d7cca51-3d70-47ac-b1f9-ed181a1d8826/nova-metadata-log/0.log" Dec 01 16:07:50 crc kubenswrapper[4637]: I1201 16:07:50.923239 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3cfe4e59-0e72-4440-b962-2f86664cb2d7/mysql-bootstrap/0.log" Dec 01 16:07:51 crc kubenswrapper[4637]: I1201 16:07:51.220023 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3cfe4e59-0e72-4440-b962-2f86664cb2d7/mysql-bootstrap/0.log" Dec 01 16:07:51 crc kubenswrapper[4637]: I1201 16:07:51.224324 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3cfe4e59-0e72-4440-b962-2f86664cb2d7/galera/0.log" Dec 01 16:07:51 crc kubenswrapper[4637]: I1201 16:07:51.329794 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_264d91ff-c64c-4d65-bedc-4a11945042f0/nova-scheduler-scheduler/0.log" Dec 01 16:07:51 crc kubenswrapper[4637]: I1201 16:07:51.606583 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dae9e33c-c07e-4c13-8104-d1310d91de8c/mysql-bootstrap/0.log" Dec 01 16:07:51 crc kubenswrapper[4637]: I1201 16:07:51.875046 4637 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dae9e33c-c07e-4c13-8104-d1310d91de8c/mysql-bootstrap/0.log" Dec 01 16:07:51 crc kubenswrapper[4637]: I1201 16:07:51.919899 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dae9e33c-c07e-4c13-8104-d1310d91de8c/galera/0.log" Dec 01 16:07:52 crc kubenswrapper[4637]: I1201 16:07:52.228799 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_421d907a-c7b0-4109-8d01-e725459215b9/openstackclient/0.log" Dec 01 16:07:52 crc kubenswrapper[4637]: I1201 16:07:52.283468 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-dhqng_8b8fa6a1-6ff7-4e51-a482-ad230a0cc88e/ovn-controller/0.log" Dec 01 16:07:52 crc kubenswrapper[4637]: I1201 16:07:52.559056 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-f4khp_bf139f21-cf2f-4ef1-9474-9c785a02053e/openstack-network-exporter/0.log" Dec 01 16:07:52 crc kubenswrapper[4637]: I1201 16:07:52.626682 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0d7cca51-3d70-47ac-b1f9-ed181a1d8826/nova-metadata-metadata/0.log" Dec 01 16:07:52 crc kubenswrapper[4637]: I1201 16:07:52.890344 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9pxbh_e4037f80-7861-4283-99d5-2b078ef3de4b/ovsdb-server-init/0.log" Dec 01 16:07:53 crc kubenswrapper[4637]: I1201 16:07:53.240441 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_5903ea37-db92-4a76-afb4-14cfa23415d0/memcached/0.log" Dec 01 16:07:53 crc kubenswrapper[4637]: I1201 16:07:53.433513 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9pxbh_e4037f80-7861-4283-99d5-2b078ef3de4b/ovs-vswitchd/0.log" Dec 01 16:07:53 crc kubenswrapper[4637]: I1201 16:07:53.527179 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-9pxbh_e4037f80-7861-4283-99d5-2b078ef3de4b/ovsdb-server/0.log" Dec 01 16:07:53 crc kubenswrapper[4637]: I1201 16:07:53.536870 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9pxbh_e4037f80-7861-4283-99d5-2b078ef3de4b/ovsdb-server-init/0.log" Dec 01 16:07:53 crc kubenswrapper[4637]: I1201 16:07:53.597424 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-n9fmc_07986dae-e60d-4809-88fe-cbd86b27ef81/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 16:07:53 crc kubenswrapper[4637]: I1201 16:07:53.772004 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5348dcbd-104a-4fff-9414-bb859f58fd52/ovn-northd/0.log" Dec 01 16:07:53 crc kubenswrapper[4637]: I1201 16:07:53.859776 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5348dcbd-104a-4fff-9414-bb859f58fd52/openstack-network-exporter/0.log" Dec 01 16:07:53 crc kubenswrapper[4637]: I1201 16:07:53.874511 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bfeecd83-4225-4d76-8002-6593dc66ab4f/openstack-network-exporter/0.log" Dec 01 16:07:54 crc kubenswrapper[4637]: I1201 16:07:54.088680 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bfeecd83-4225-4d76-8002-6593dc66ab4f/ovsdbserver-nb/0.log" Dec 01 16:07:54 crc kubenswrapper[4637]: I1201 16:07:54.158177 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_db3ed190-cdcc-4547-b48f-d09f6e881dfb/openstack-network-exporter/0.log" Dec 01 16:07:54 crc kubenswrapper[4637]: I1201 16:07:54.166815 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_db3ed190-cdcc-4547-b48f-d09f6e881dfb/ovsdbserver-sb/0.log" Dec 01 16:07:54 crc kubenswrapper[4637]: I1201 16:07:54.484951 4637 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_placement-85bcc8d488-896bl_74bff823-e398-4a06-a477-d98060ddad39/placement-api/0.log" Dec 01 16:07:54 crc kubenswrapper[4637]: I1201 16:07:54.525632 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85bcc8d488-896bl_74bff823-e398-4a06-a477-d98060ddad39/placement-log/0.log" Dec 01 16:07:54 crc kubenswrapper[4637]: I1201 16:07:54.579496 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_64730b89-aa49-4741-b050-c283d98626c9/setup-container/0.log" Dec 01 16:07:54 crc kubenswrapper[4637]: I1201 16:07:54.713882 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_64730b89-aa49-4741-b050-c283d98626c9/rabbitmq/0.log" Dec 01 16:07:54 crc kubenswrapper[4637]: I1201 16:07:54.768577 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_64730b89-aa49-4741-b050-c283d98626c9/setup-container/0.log" Dec 01 16:07:54 crc kubenswrapper[4637]: I1201 16:07:54.802058 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_966262a4-bd2b-40fd-b052-ce2bd68485b5/setup-container/0.log" Dec 01 16:07:55 crc kubenswrapper[4637]: I1201 16:07:55.470430 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_966262a4-bd2b-40fd-b052-ce2bd68485b5/setup-container/0.log" Dec 01 16:07:55 crc kubenswrapper[4637]: I1201 16:07:55.501286 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-2d5vq_d3b484f7-438b-4ea9-9529-9ba5a49fca84/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 16:07:55 crc kubenswrapper[4637]: I1201 16:07:55.553312 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_966262a4-bd2b-40fd-b052-ce2bd68485b5/rabbitmq/0.log" Dec 01 16:07:55 crc kubenswrapper[4637]: I1201 16:07:55.799924 4637 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-mlpn5_24361437-f549-45e9-af51-6a842e4bc82e/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 16:07:55 crc kubenswrapper[4637]: I1201 16:07:55.805157 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-v7vw9_f42fd0ac-99c4-49ed-87d0-fe00a580a2ea/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 16:07:55 crc kubenswrapper[4637]: I1201 16:07:55.962327 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-5ctt6_b2f971f6-6729-4d92-9849-2c03e6d0747b/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 16:07:56 crc kubenswrapper[4637]: I1201 16:07:56.093534 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-cmb78_ff7c15eb-bee2-412f-8689-ba46478d7b33/ssh-known-hosts-edpm-deployment/0.log" Dec 01 16:07:56 crc kubenswrapper[4637]: I1201 16:07:56.307892 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-84f489b6b7-wswv6_f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da/proxy-httpd/0.log" Dec 01 16:07:56 crc kubenswrapper[4637]: I1201 16:07:56.308832 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-84f489b6b7-wswv6_f00cbbd6-7a5f-4e23-9df2-8f3c2351e6da/proxy-server/0.log" Dec 01 16:07:56 crc kubenswrapper[4637]: I1201 16:07:56.398558 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-vnnqf_911910d8-9f6c-4c97-b4b9-1bf0a5ae8a51/swift-ring-rebalance/0.log" Dec 01 16:07:56 crc kubenswrapper[4637]: I1201 16:07:56.505315 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/account-auditor/0.log" Dec 01 16:07:56 crc kubenswrapper[4637]: I1201 16:07:56.765565 4637 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/account-reaper/0.log" Dec 01 16:07:57 crc kubenswrapper[4637]: I1201 16:07:57.202245 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/account-replicator/0.log" Dec 01 16:07:57 crc kubenswrapper[4637]: I1201 16:07:57.223695 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/container-server/0.log" Dec 01 16:07:57 crc kubenswrapper[4637]: I1201 16:07:57.241398 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/container-auditor/0.log" Dec 01 16:07:57 crc kubenswrapper[4637]: I1201 16:07:57.294530 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/account-server/0.log" Dec 01 16:07:57 crc kubenswrapper[4637]: I1201 16:07:57.319864 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/container-replicator/0.log" Dec 01 16:07:57 crc kubenswrapper[4637]: I1201 16:07:57.462801 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/object-auditor/0.log" Dec 01 16:07:57 crc kubenswrapper[4637]: I1201 16:07:57.473426 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/object-expirer/0.log" Dec 01 16:07:57 crc kubenswrapper[4637]: I1201 16:07:57.552370 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/container-updater/0.log" Dec 01 16:07:57 crc kubenswrapper[4637]: I1201 16:07:57.555060 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/object-server/0.log" Dec 01 16:07:57 crc kubenswrapper[4637]: I1201 16:07:57.628659 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/object-replicator/0.log" Dec 01 16:07:57 crc kubenswrapper[4637]: I1201 16:07:57.691206 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/object-updater/0.log" Dec 01 16:07:57 crc kubenswrapper[4637]: I1201 16:07:57.747741 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/rsync/0.log" Dec 01 16:07:57 crc kubenswrapper[4637]: I1201 16:07:57.811421 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2fd7aa8-b5cb-4c3c-976c-210541a77440/swift-recon-cron/0.log" Dec 01 16:07:57 crc kubenswrapper[4637]: I1201 16:07:57.956557 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-ks7zg_48718ab6-39c1-430f-ac3c-711d073d32f9/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 16:07:58 crc kubenswrapper[4637]: I1201 16:07:58.096533 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_151da5f8-6a6e-4d06-b6a5-de2982ed8da5/tempest-tests-tempest-tests-runner/0.log" Dec 01 16:07:58 crc kubenswrapper[4637]: I1201 16:07:58.205343 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_d820189d-4832-48ee-93e1-d501b8ef91b8/test-operator-logs-container/0.log" Dec 01 16:07:58 crc kubenswrapper[4637]: I1201 16:07:58.300765 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-vppnt_7a9a0491-0639-431b-bf41-812e29d6f3b4/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 16:08:26 crc kubenswrapper[4637]: I1201 16:08:26.445738 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp_ae244c42-651d-4f31-9639-ba005da6ccc9/util/0.log" Dec 01 16:08:26 crc kubenswrapper[4637]: I1201 16:08:26.606438 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp_ae244c42-651d-4f31-9639-ba005da6ccc9/util/0.log" Dec 01 16:08:26 crc kubenswrapper[4637]: I1201 16:08:26.634516 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp_ae244c42-651d-4f31-9639-ba005da6ccc9/pull/0.log" Dec 01 16:08:26 crc kubenswrapper[4637]: I1201 16:08:26.646812 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp_ae244c42-651d-4f31-9639-ba005da6ccc9/pull/0.log" Dec 01 16:08:26 crc kubenswrapper[4637]: I1201 16:08:26.802702 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp_ae244c42-651d-4f31-9639-ba005da6ccc9/util/0.log" Dec 01 16:08:26 crc kubenswrapper[4637]: I1201 16:08:26.869088 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp_ae244c42-651d-4f31-9639-ba005da6ccc9/pull/0.log" Dec 01 16:08:26 crc kubenswrapper[4637]: I1201 16:08:26.903321 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_1894473f3da4a23dc9208123acd52c19564a84cbfe67e094e807c656474kxlp_ae244c42-651d-4f31-9639-ba005da6ccc9/extract/0.log" Dec 01 16:08:27 crc kubenswrapper[4637]: I1201 16:08:27.086284 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-5bfbbb859d-xm2c9_e3fe5f37-3b9c-4d1d-9890-920cfaad9b36/kube-rbac-proxy/0.log" Dec 01 16:08:27 crc kubenswrapper[4637]: I1201 16:08:27.145503 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-5bfbbb859d-xm2c9_e3fe5f37-3b9c-4d1d-9890-920cfaad9b36/manager/0.log" Dec 01 16:08:27 crc kubenswrapper[4637]: I1201 16:08:27.206686 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-748967c98-7t5h2_612d1951-263e-4d58-a3ab-94f8b2ddcb68/kube-rbac-proxy/0.log" Dec 01 16:08:27 crc kubenswrapper[4637]: I1201 16:08:27.346381 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-748967c98-7t5h2_612d1951-263e-4d58-a3ab-94f8b2ddcb68/manager/0.log" Dec 01 16:08:27 crc kubenswrapper[4637]: I1201 16:08:27.419139 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6788cc6d75-ngs55_e552181e-b9e1-43f4-825f-649923e52631/manager/0.log" Dec 01 16:08:27 crc kubenswrapper[4637]: I1201 16:08:27.450234 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6788cc6d75-ngs55_e552181e-b9e1-43f4-825f-649923e52631/kube-rbac-proxy/0.log" Dec 01 16:08:27 crc kubenswrapper[4637]: I1201 16:08:27.622239 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6bd966bbd4-4c7cm_9a6330bc-2072-40b9-a81b-00d532b6b804/kube-rbac-proxy/0.log" Dec 01 16:08:27 crc kubenswrapper[4637]: I1201 
16:08:27.750112 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6bd966bbd4-4c7cm_9a6330bc-2072-40b9-a81b-00d532b6b804/manager/0.log" Dec 01 16:08:27 crc kubenswrapper[4637]: I1201 16:08:27.901605 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-698d6fd7d6-4qp7b_df23c8f8-8046-4e98-a46b-cc7c829981b9/manager/0.log" Dec 01 16:08:28 crc kubenswrapper[4637]: I1201 16:08:28.209710 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-698d6fd7d6-4qp7b_df23c8f8-8046-4e98-a46b-cc7c829981b9/kube-rbac-proxy/0.log" Dec 01 16:08:28 crc kubenswrapper[4637]: I1201 16:08:28.351419 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7d5d9fd47f-njp6w_60365f73-6418-4fdc-901b-07a2321fdcf3/kube-rbac-proxy/0.log" Dec 01 16:08:28 crc kubenswrapper[4637]: I1201 16:08:28.399657 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7d5d9fd47f-njp6w_60365f73-6418-4fdc-901b-07a2321fdcf3/manager/0.log" Dec 01 16:08:28 crc kubenswrapper[4637]: I1201 16:08:28.632664 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-577c5f6d94-kx7cf_9235c8f6-6738-496d-a945-42ba5d15afd2/kube-rbac-proxy/0.log" Dec 01 16:08:28 crc kubenswrapper[4637]: I1201 16:08:28.765048 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-577c5f6d94-kx7cf_9235c8f6-6738-496d-a945-42ba5d15afd2/manager/0.log" Dec 01 16:08:28 crc kubenswrapper[4637]: I1201 16:08:28.860563 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-54485f899-c8xnq_ee97ba6c-4f2f-4a8a-b631-ae8a77b4c35b/kube-rbac-proxy/0.log" Dec 01 
16:08:28 crc kubenswrapper[4637]: I1201 16:08:28.981895 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-54485f899-c8xnq_ee97ba6c-4f2f-4a8a-b631-ae8a77b4c35b/manager/0.log" Dec 01 16:08:29 crc kubenswrapper[4637]: I1201 16:08:29.030547 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7d6f5d799-p5htp_68dbc1ea-c95b-48b1-a4a3-542c87f531ac/kube-rbac-proxy/0.log" Dec 01 16:08:29 crc kubenswrapper[4637]: I1201 16:08:29.234063 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7d6f5d799-p5htp_68dbc1ea-c95b-48b1-a4a3-542c87f531ac/manager/0.log" Dec 01 16:08:29 crc kubenswrapper[4637]: I1201 16:08:29.299259 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-646fd589f9-4h7xc_0c561b38-c3aa-492a-bcec-9c471c3fbf0b/kube-rbac-proxy/0.log" Dec 01 16:08:29 crc kubenswrapper[4637]: I1201 16:08:29.343417 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-646fd589f9-4h7xc_0c561b38-c3aa-492a-bcec-9c471c3fbf0b/manager/0.log" Dec 01 16:08:29 crc kubenswrapper[4637]: I1201 16:08:29.494499 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-64d7c556cd-ng9gk_641c4df0-62e4-4b62-8f75-60e49bb56f7a/kube-rbac-proxy/0.log" Dec 01 16:08:29 crc kubenswrapper[4637]: I1201 16:08:29.629483 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-64d7c556cd-ng9gk_641c4df0-62e4-4b62-8f75-60e49bb56f7a/manager/0.log" Dec 01 16:08:29 crc kubenswrapper[4637]: I1201 16:08:29.746558 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6b6c55ffd5-vhj5n_1f5d18af-662c-438a-ab53-62d6c6049921/kube-rbac-proxy/0.log" Dec 01 16:08:29 crc kubenswrapper[4637]: I1201 16:08:29.889326 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6b6c55ffd5-vhj5n_1f5d18af-662c-438a-ab53-62d6c6049921/manager/0.log" Dec 01 16:08:30 crc kubenswrapper[4637]: I1201 16:08:30.008972 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79d658b66d-j8flc_bf0d761a-bcaa-4b9d-8e16-5c478c9a90d5/kube-rbac-proxy/0.log" Dec 01 16:08:30 crc kubenswrapper[4637]: I1201 16:08:30.081960 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79d658b66d-j8flc_bf0d761a-bcaa-4b9d-8e16-5c478c9a90d5/manager/0.log" Dec 01 16:08:30 crc kubenswrapper[4637]: I1201 16:08:30.234462 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7979c68bc7-69cgp_1836e03a-1ea3-4a52-98e5-9e6f7e04d1b0/kube-rbac-proxy/0.log" Dec 01 16:08:30 crc kubenswrapper[4637]: I1201 16:08:30.274390 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7979c68bc7-69cgp_1836e03a-1ea3-4a52-98e5-9e6f7e04d1b0/manager/0.log" Dec 01 16:08:30 crc kubenswrapper[4637]: I1201 16:08:30.425108 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-77868f484-wbjjx_e99ec116-bc40-4275-b124-476b780bf9ca/kube-rbac-proxy/0.log" Dec 01 16:08:30 crc kubenswrapper[4637]: I1201 16:08:30.562075 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-77868f484-wbjjx_e99ec116-bc40-4275-b124-476b780bf9ca/manager/0.log" Dec 01 16:08:30 crc kubenswrapper[4637]: 
I1201 16:08:30.604323 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6477f85467-czzlb_32635512-8e34-46b3-8285-7cdc293b15e4/kube-rbac-proxy/0.log" Dec 01 16:08:30 crc kubenswrapper[4637]: I1201 16:08:30.838220 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5bf66cbd54-q5fc8_2d479f26-6243-4089-9fcd-1821d05cf3f4/kube-rbac-proxy/0.log" Dec 01 16:08:31 crc kubenswrapper[4637]: I1201 16:08:31.095977 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5bf66cbd54-q5fc8_2d479f26-6243-4089-9fcd-1821d05cf3f4/operator/0.log" Dec 01 16:08:31 crc kubenswrapper[4637]: I1201 16:08:31.174913 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-6dtxv_fa9cd1f8-d36a-4263-b944-8594a42fe15f/registry-server/0.log" Dec 01 16:08:31 crc kubenswrapper[4637]: I1201 16:08:31.519092 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5b67cfc8fb-cnl9j_a4dea3ee-ee87-4f8b-8b76-55db9bc85fc9/kube-rbac-proxy/0.log" Dec 01 16:08:31 crc kubenswrapper[4637]: I1201 16:08:31.540227 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5b67cfc8fb-cnl9j_a4dea3ee-ee87-4f8b-8b76-55db9bc85fc9/manager/0.log" Dec 01 16:08:31 crc kubenswrapper[4637]: I1201 16:08:31.591833 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-867d87977b-tpjpc_d8f49f2b-6edc-40e6-b5cf-da3e8f26009f/kube-rbac-proxy/0.log" Dec 01 16:08:31 crc kubenswrapper[4637]: I1201 16:08:31.842534 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-867d87977b-tpjpc_d8f49f2b-6edc-40e6-b5cf-da3e8f26009f/manager/0.log" 
Dec 01 16:08:31 crc kubenswrapper[4637]: I1201 16:08:31.879947 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-hmlgw_6f0c83fd-5afa-48c8-aa05-ce507abc52c6/operator/0.log" Dec 01 16:08:31 crc kubenswrapper[4637]: I1201 16:08:31.921534 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6477f85467-czzlb_32635512-8e34-46b3-8285-7cdc293b15e4/manager/0.log" Dec 01 16:08:32 crc kubenswrapper[4637]: I1201 16:08:32.047564 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-cc9f5bc5c-lfzmh_4f425213-2aa4-419c-b672-22a94b28958a/kube-rbac-proxy/0.log" Dec 01 16:08:32 crc kubenswrapper[4637]: I1201 16:08:32.111245 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-cc9f5bc5c-lfzmh_4f425213-2aa4-419c-b672-22a94b28958a/manager/0.log" Dec 01 16:08:32 crc kubenswrapper[4637]: I1201 16:08:32.150676 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58487d9bf4-khr5x_75a2f55f-977e-4608-86e3-ad7cbb948420/kube-rbac-proxy/0.log" Dec 01 16:08:32 crc kubenswrapper[4637]: I1201 16:08:32.282951 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58487d9bf4-khr5x_75a2f55f-977e-4608-86e3-ad7cbb948420/manager/0.log" Dec 01 16:08:32 crc kubenswrapper[4637]: I1201 16:08:32.404430 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-77db6bf9c-j4ktr_3d463954-f36f-4cc1-9303-df4f1e7b4c0c/kube-rbac-proxy/0.log" Dec 01 16:08:32 crc kubenswrapper[4637]: I1201 16:08:32.489497 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_test-operator-controller-manager-77db6bf9c-j4ktr_3d463954-f36f-4cc1-9303-df4f1e7b4c0c/manager/0.log" Dec 01 16:08:32 crc kubenswrapper[4637]: I1201 16:08:32.531562 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6b56b8849f-tm79k_df856307-2e53-4198-b26b-f7cc780f6917/kube-rbac-proxy/0.log" Dec 01 16:08:32 crc kubenswrapper[4637]: I1201 16:08:32.561783 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6b56b8849f-tm79k_df856307-2e53-4198-b26b-f7cc780f6917/manager/0.log" Dec 01 16:08:45 crc kubenswrapper[4637]: I1201 16:08:45.613547 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 16:08:45 crc kubenswrapper[4637]: I1201 16:08:45.614277 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 16:08:50 crc kubenswrapper[4637]: I1201 16:08:50.969491 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-w6sz5_369bf28c-11f9-494a-8a91-a11e861d84e0/control-plane-machine-set-operator/0.log" Dec 01 16:08:51 crc kubenswrapper[4637]: I1201 16:08:51.231348 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jtkrh_6f680eac-8309-428b-9b5e-f5324aaf426a/kube-rbac-proxy/0.log" Dec 01 16:08:51 crc kubenswrapper[4637]: I1201 16:08:51.265646 4637 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jtkrh_6f680eac-8309-428b-9b5e-f5324aaf426a/machine-api-operator/0.log" Dec 01 16:09:06 crc kubenswrapper[4637]: I1201 16:09:06.682053 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-8dkhl_6ada9875-197f-49ea-ae31-130a5e7a6229/cert-manager-controller/0.log" Dec 01 16:09:07 crc kubenswrapper[4637]: I1201 16:09:07.132558 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-jfx77_915faac2-3a6c-44de-8f47-a7d3c0aa2306/cert-manager-cainjector/0.log" Dec 01 16:09:07 crc kubenswrapper[4637]: I1201 16:09:07.176756 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-2gbzf_96ee03cd-c317-432b-8918-7e13da710acb/cert-manager-webhook/0.log" Dec 01 16:09:15 crc kubenswrapper[4637]: I1201 16:09:15.613068 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 16:09:15 crc kubenswrapper[4637]: I1201 16:09:15.613815 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 16:09:21 crc kubenswrapper[4637]: I1201 16:09:21.772422 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-vggm6_8c3b9a86-e588-47f5-a465-45691a6808e1/nmstate-console-plugin/0.log" Dec 01 16:09:22 crc kubenswrapper[4637]: I1201 16:09:22.523270 4637 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-tnkgd_508d135b-1c5a-49db-a896-e7489b8c9968/kube-rbac-proxy/0.log" Dec 01 16:09:22 crc kubenswrapper[4637]: I1201 16:09:22.575527 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-vjj7n_7375b015-a69b-4993-abf8-6c18215144da/nmstate-handler/0.log" Dec 01 16:09:22 crc kubenswrapper[4637]: I1201 16:09:22.596447 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-tnkgd_508d135b-1c5a-49db-a896-e7489b8c9968/nmstate-metrics/0.log" Dec 01 16:09:22 crc kubenswrapper[4637]: I1201 16:09:22.850182 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-ccqrg_802cfe24-78e7-428d-89b5-04b5a610b9fb/nmstate-operator/0.log" Dec 01 16:09:22 crc kubenswrapper[4637]: I1201 16:09:22.939083 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-mz4j4_3ed53917-b528-4d89-9503-578c448fd6c7/nmstate-webhook/0.log" Dec 01 16:09:41 crc kubenswrapper[4637]: I1201 16:09:41.333804 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-nwkx8_fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d/kube-rbac-proxy/0.log" Dec 01 16:09:41 crc kubenswrapper[4637]: I1201 16:09:41.466104 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-nwkx8_fa9b7dc0-f1ec-41b0-91b9-11bef4d2e56d/controller/0.log" Dec 01 16:09:41 crc kubenswrapper[4637]: I1201 16:09:41.628320 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/cp-frr-files/0.log" Dec 01 16:09:42 crc kubenswrapper[4637]: I1201 16:09:42.679764 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/cp-frr-files/0.log" Dec 01 16:09:42 crc 
kubenswrapper[4637]: I1201 16:09:42.715716 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/cp-reloader/0.log" Dec 01 16:09:42 crc kubenswrapper[4637]: I1201 16:09:42.731811 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/cp-reloader/0.log" Dec 01 16:09:42 crc kubenswrapper[4637]: I1201 16:09:42.732032 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/cp-metrics/0.log" Dec 01 16:09:43 crc kubenswrapper[4637]: I1201 16:09:43.094108 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/cp-frr-files/0.log" Dec 01 16:09:43 crc kubenswrapper[4637]: I1201 16:09:43.139040 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/cp-metrics/0.log" Dec 01 16:09:43 crc kubenswrapper[4637]: I1201 16:09:43.140650 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/cp-reloader/0.log" Dec 01 16:09:43 crc kubenswrapper[4637]: I1201 16:09:43.186390 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/cp-metrics/0.log" Dec 01 16:09:43 crc kubenswrapper[4637]: I1201 16:09:43.378223 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/cp-metrics/0.log" Dec 01 16:09:43 crc kubenswrapper[4637]: I1201 16:09:43.381056 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/cp-frr-files/0.log" Dec 01 16:09:43 crc kubenswrapper[4637]: I1201 16:09:43.428446 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/cp-reloader/0.log" Dec 01 16:09:43 crc kubenswrapper[4637]: I1201 16:09:43.470515 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/controller/0.log" Dec 01 16:09:43 crc kubenswrapper[4637]: I1201 16:09:43.641782 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/kube-rbac-proxy/0.log" Dec 01 16:09:43 crc kubenswrapper[4637]: I1201 16:09:43.678117 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/frr-metrics/0.log" Dec 01 16:09:43 crc kubenswrapper[4637]: I1201 16:09:43.749299 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/kube-rbac-proxy-frr/0.log" Dec 01 16:09:44 crc kubenswrapper[4637]: I1201 16:09:44.012672 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/reloader/0.log" Dec 01 16:09:44 crc kubenswrapper[4637]: I1201 16:09:44.803190 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-8669bf5bd5-vcn5v_baaff01d-29a0-44c3-8b9f-c8e8e3afd1f4/manager/0.log" Dec 01 16:09:44 crc kubenswrapper[4637]: I1201 16:09:44.825837 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-j76nt_b9100ec0-5fe8-4ad1-bce7-40ca6cf5e923/frr-k8s-webhook-server/0.log" Dec 01 16:09:45 crc kubenswrapper[4637]: I1201 16:09:45.045807 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxfwh_44a26d22-db8c-4b4d-a6c7-286ebd0197c5/frr/0.log" Dec 01 16:09:45 crc kubenswrapper[4637]: I1201 16:09:45.076595 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-59b9f9d896-crh4k_60286b73-70b9-46ce-8fca-28552760b79e/webhook-server/0.log" Dec 01 16:09:45 crc kubenswrapper[4637]: I1201 16:09:45.246113 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-r4rsx_43335bc5-11b9-4763-bf18-efeaef24d35a/kube-rbac-proxy/0.log" Dec 01 16:09:45 crc kubenswrapper[4637]: I1201 16:09:45.614598 4637 patch_prober.go:28] interesting pod/machine-config-daemon-p7rjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 16:09:45 crc kubenswrapper[4637]: I1201 16:09:45.614674 4637 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 16:09:45 crc kubenswrapper[4637]: I1201 16:09:45.614732 4637 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" Dec 01 16:09:45 crc kubenswrapper[4637]: I1201 16:09:45.615673 4637 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd"} pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 16:09:45 crc kubenswrapper[4637]: I1201 16:09:45.615739 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" 
containerName="machine-config-daemon" containerID="cri-o://61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd" gracePeriod=600 Dec 01 16:09:45 crc kubenswrapper[4637]: I1201 16:09:45.628147 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-r4rsx_43335bc5-11b9-4763-bf18-efeaef24d35a/speaker/0.log" Dec 01 16:09:45 crc kubenswrapper[4637]: E1201 16:09:45.769222 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:09:46 crc kubenswrapper[4637]: I1201 16:09:46.073333 4637 generic.go:334] "Generic (PLEG): container finished" podID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" containerID="61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd" exitCode=0 Dec 01 16:09:46 crc kubenswrapper[4637]: I1201 16:09:46.073406 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerDied","Data":"61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd"} Dec 01 16:09:46 crc kubenswrapper[4637]: I1201 16:09:46.073466 4637 scope.go:117] "RemoveContainer" containerID="fb2f4d33bc8f71279f1adfc13c4437e8709d8e9307a5869b86c33a7ed64905ad" Dec 01 16:09:46 crc kubenswrapper[4637]: I1201 16:09:46.074542 4637 scope.go:117] "RemoveContainer" containerID="61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd" Dec 01 16:09:46 crc kubenswrapper[4637]: E1201 16:09:46.074925 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:09:59 crc kubenswrapper[4637]: I1201 16:09:59.155650 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r_b474756e-aed2-462a-be8d-0ac67a276717/util/0.log" Dec 01 16:09:59 crc kubenswrapper[4637]: I1201 16:09:59.478213 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r_b474756e-aed2-462a-be8d-0ac67a276717/util/0.log" Dec 01 16:09:59 crc kubenswrapper[4637]: I1201 16:09:59.511938 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r_b474756e-aed2-462a-be8d-0ac67a276717/pull/0.log" Dec 01 16:09:59 crc kubenswrapper[4637]: I1201 16:09:59.529077 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r_b474756e-aed2-462a-be8d-0ac67a276717/pull/0.log" Dec 01 16:09:59 crc kubenswrapper[4637]: I1201 16:09:59.716288 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r_b474756e-aed2-462a-be8d-0ac67a276717/util/0.log" Dec 01 16:09:59 crc kubenswrapper[4637]: I1201 16:09:59.780321 4637 scope.go:117] "RemoveContainer" containerID="61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd" Dec 01 16:09:59 crc kubenswrapper[4637]: E1201 16:09:59.780725 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:10:00 crc kubenswrapper[4637]: I1201 16:10:00.596451 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r_b474756e-aed2-462a-be8d-0ac67a276717/extract/0.log" Dec 01 16:10:00 crc kubenswrapper[4637]: I1201 16:10:00.639441 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fd5j6r_b474756e-aed2-462a-be8d-0ac67a276717/pull/0.log" Dec 01 16:10:00 crc kubenswrapper[4637]: I1201 16:10:00.802230 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs_b46b721a-cd47-48db-b343-ea841d5ae9fc/util/0.log" Dec 01 16:10:01 crc kubenswrapper[4637]: I1201 16:10:01.056346 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs_b46b721a-cd47-48db-b343-ea841d5ae9fc/util/0.log" Dec 01 16:10:01 crc kubenswrapper[4637]: I1201 16:10:01.087895 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs_b46b721a-cd47-48db-b343-ea841d5ae9fc/pull/0.log" Dec 01 16:10:01 crc kubenswrapper[4637]: I1201 16:10:01.134123 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs_b46b721a-cd47-48db-b343-ea841d5ae9fc/pull/0.log" Dec 01 16:10:01 crc kubenswrapper[4637]: I1201 16:10:01.322210 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs_b46b721a-cd47-48db-b343-ea841d5ae9fc/pull/0.log" Dec 01 16:10:01 crc kubenswrapper[4637]: I1201 16:10:01.352861 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs_b46b721a-cd47-48db-b343-ea841d5ae9fc/util/0.log" Dec 01 16:10:01 crc kubenswrapper[4637]: I1201 16:10:01.358722 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835mjvs_b46b721a-cd47-48db-b343-ea841d5ae9fc/extract/0.log" Dec 01 16:10:01 crc kubenswrapper[4637]: I1201 16:10:01.584200 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mnvwp_07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4/extract-utilities/0.log" Dec 01 16:10:01 crc kubenswrapper[4637]: I1201 16:10:01.797267 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mnvwp_07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4/extract-utilities/0.log" Dec 01 16:10:01 crc kubenswrapper[4637]: I1201 16:10:01.799909 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mnvwp_07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4/extract-content/0.log" Dec 01 16:10:01 crc kubenswrapper[4637]: I1201 16:10:01.891099 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mnvwp_07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4/extract-content/0.log" Dec 01 16:10:02 crc kubenswrapper[4637]: I1201 16:10:02.135897 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mnvwp_07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4/extract-content/0.log" Dec 01 16:10:02 crc kubenswrapper[4637]: I1201 16:10:02.190051 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-mnvwp_07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4/extract-utilities/0.log" Dec 01 16:10:02 crc kubenswrapper[4637]: I1201 16:10:02.318017 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mnvwp_07084dcc-a9af-40a5-b0ce-2ac0bdd63ac4/registry-server/0.log" Dec 01 16:10:02 crc kubenswrapper[4637]: I1201 16:10:02.425288 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gftc2_d8dea35a-d142-4d51-9045-eba9f8449490/extract-utilities/0.log" Dec 01 16:10:02 crc kubenswrapper[4637]: I1201 16:10:02.587612 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gftc2_d8dea35a-d142-4d51-9045-eba9f8449490/extract-utilities/0.log" Dec 01 16:10:02 crc kubenswrapper[4637]: I1201 16:10:02.630011 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gftc2_d8dea35a-d142-4d51-9045-eba9f8449490/extract-content/0.log" Dec 01 16:10:02 crc kubenswrapper[4637]: I1201 16:10:02.648673 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gftc2_d8dea35a-d142-4d51-9045-eba9f8449490/extract-content/0.log" Dec 01 16:10:02 crc kubenswrapper[4637]: I1201 16:10:02.901577 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gftc2_d8dea35a-d142-4d51-9045-eba9f8449490/extract-content/0.log" Dec 01 16:10:02 crc kubenswrapper[4637]: I1201 16:10:02.987540 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gftc2_d8dea35a-d142-4d51-9045-eba9f8449490/extract-utilities/0.log" Dec 01 16:10:03 crc kubenswrapper[4637]: I1201 16:10:03.283568 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8kwzk_1c222b01-860c-4973-9a37-7abcbfdf910f/marketplace-operator/0.log" Dec 01 16:10:03 crc kubenswrapper[4637]: I1201 16:10:03.424372 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-df4gw_2a407577-a89c-4bd1-9e97-0140f2ea2c40/extract-utilities/0.log" Dec 01 16:10:03 crc kubenswrapper[4637]: I1201 16:10:03.655354 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gftc2_d8dea35a-d142-4d51-9045-eba9f8449490/registry-server/0.log" Dec 01 16:10:03 crc kubenswrapper[4637]: I1201 16:10:03.690726 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-df4gw_2a407577-a89c-4bd1-9e97-0140f2ea2c40/extract-utilities/0.log" Dec 01 16:10:03 crc kubenswrapper[4637]: I1201 16:10:03.707366 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-df4gw_2a407577-a89c-4bd1-9e97-0140f2ea2c40/extract-content/0.log" Dec 01 16:10:03 crc kubenswrapper[4637]: I1201 16:10:03.748475 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-df4gw_2a407577-a89c-4bd1-9e97-0140f2ea2c40/extract-content/0.log" Dec 01 16:10:03 crc kubenswrapper[4637]: I1201 16:10:03.964022 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-df4gw_2a407577-a89c-4bd1-9e97-0140f2ea2c40/extract-content/0.log" Dec 01 16:10:03 crc kubenswrapper[4637]: I1201 16:10:03.994008 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-df4gw_2a407577-a89c-4bd1-9e97-0140f2ea2c40/extract-utilities/0.log" Dec 01 16:10:04 crc kubenswrapper[4637]: I1201 16:10:04.202470 4637 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-df4gw_2a407577-a89c-4bd1-9e97-0140f2ea2c40/registry-server/0.log" Dec 01 16:10:04 crc kubenswrapper[4637]: I1201 16:10:04.263191 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qhcl7_1a364742-5315-43af-9e16-66e4d39282f8/extract-utilities/0.log" Dec 01 16:10:04 crc kubenswrapper[4637]: I1201 16:10:04.484258 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qhcl7_1a364742-5315-43af-9e16-66e4d39282f8/extract-content/0.log" Dec 01 16:10:04 crc kubenswrapper[4637]: I1201 16:10:04.526917 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qhcl7_1a364742-5315-43af-9e16-66e4d39282f8/extract-content/0.log" Dec 01 16:10:04 crc kubenswrapper[4637]: I1201 16:10:04.531280 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qhcl7_1a364742-5315-43af-9e16-66e4d39282f8/extract-utilities/0.log" Dec 01 16:10:04 crc kubenswrapper[4637]: I1201 16:10:04.716718 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qhcl7_1a364742-5315-43af-9e16-66e4d39282f8/extract-content/0.log" Dec 01 16:10:04 crc kubenswrapper[4637]: I1201 16:10:04.779370 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qhcl7_1a364742-5315-43af-9e16-66e4d39282f8/extract-utilities/0.log" Dec 01 16:10:04 crc kubenswrapper[4637]: I1201 16:10:04.844721 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qhcl7_1a364742-5315-43af-9e16-66e4d39282f8/registry-server/0.log" Dec 01 16:10:11 crc kubenswrapper[4637]: I1201 16:10:11.771710 4637 scope.go:117] "RemoveContainer" containerID="61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd" Dec 01 16:10:11 crc kubenswrapper[4637]: E1201 16:10:11.772574 
4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:10:23 crc kubenswrapper[4637]: I1201 16:10:23.771987 4637 scope.go:117] "RemoveContainer" containerID="61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd" Dec 01 16:10:23 crc kubenswrapper[4637]: E1201 16:10:23.773297 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:10:37 crc kubenswrapper[4637]: I1201 16:10:37.772103 4637 scope.go:117] "RemoveContainer" containerID="61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd" Dec 01 16:10:37 crc kubenswrapper[4637]: E1201 16:10:37.773185 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:10:52 crc kubenswrapper[4637]: I1201 16:10:52.771708 4637 scope.go:117] "RemoveContainer" containerID="61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd" Dec 01 16:10:52 crc kubenswrapper[4637]: E1201 
16:10:52.773140 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:11:06 crc kubenswrapper[4637]: I1201 16:11:06.771912 4637 scope.go:117] "RemoveContainer" containerID="61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd" Dec 01 16:11:06 crc kubenswrapper[4637]: E1201 16:11:06.772810 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:11:18 crc kubenswrapper[4637]: I1201 16:11:18.772391 4637 scope.go:117] "RemoveContainer" containerID="61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd" Dec 01 16:11:18 crc kubenswrapper[4637]: E1201 16:11:18.776158 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:11:33 crc kubenswrapper[4637]: I1201 16:11:33.771223 4637 scope.go:117] "RemoveContainer" containerID="61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd" Dec 01 16:11:33 crc 
kubenswrapper[4637]: E1201 16:11:33.773064 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:11:36 crc kubenswrapper[4637]: I1201 16:11:36.746030 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lk95q"] Dec 01 16:11:36 crc kubenswrapper[4637]: E1201 16:11:36.746964 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2b7115c-09e1-4731-9f77-1d1756a0ceb3" containerName="container-00" Dec 01 16:11:36 crc kubenswrapper[4637]: I1201 16:11:36.746983 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2b7115c-09e1-4731-9f77-1d1756a0ceb3" containerName="container-00" Dec 01 16:11:36 crc kubenswrapper[4637]: I1201 16:11:36.747264 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2b7115c-09e1-4731-9f77-1d1756a0ceb3" containerName="container-00" Dec 01 16:11:36 crc kubenswrapper[4637]: I1201 16:11:36.749079 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lk95q" Dec 01 16:11:36 crc kubenswrapper[4637]: I1201 16:11:36.761168 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lk95q"] Dec 01 16:11:36 crc kubenswrapper[4637]: I1201 16:11:36.866738 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jpj4\" (UniqueName: \"kubernetes.io/projected/50cdd34b-8460-4793-8753-a34ee4c0f0c5-kube-api-access-8jpj4\") pod \"certified-operators-lk95q\" (UID: \"50cdd34b-8460-4793-8753-a34ee4c0f0c5\") " pod="openshift-marketplace/certified-operators-lk95q" Dec 01 16:11:36 crc kubenswrapper[4637]: I1201 16:11:36.866883 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50cdd34b-8460-4793-8753-a34ee4c0f0c5-utilities\") pod \"certified-operators-lk95q\" (UID: \"50cdd34b-8460-4793-8753-a34ee4c0f0c5\") " pod="openshift-marketplace/certified-operators-lk95q" Dec 01 16:11:36 crc kubenswrapper[4637]: I1201 16:11:36.866914 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50cdd34b-8460-4793-8753-a34ee4c0f0c5-catalog-content\") pod \"certified-operators-lk95q\" (UID: \"50cdd34b-8460-4793-8753-a34ee4c0f0c5\") " pod="openshift-marketplace/certified-operators-lk95q" Dec 01 16:11:36 crc kubenswrapper[4637]: I1201 16:11:36.969840 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jpj4\" (UniqueName: \"kubernetes.io/projected/50cdd34b-8460-4793-8753-a34ee4c0f0c5-kube-api-access-8jpj4\") pod \"certified-operators-lk95q\" (UID: \"50cdd34b-8460-4793-8753-a34ee4c0f0c5\") " pod="openshift-marketplace/certified-operators-lk95q" Dec 01 16:11:36 crc kubenswrapper[4637]: I1201 16:11:36.969953 4637 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50cdd34b-8460-4793-8753-a34ee4c0f0c5-utilities\") pod \"certified-operators-lk95q\" (UID: \"50cdd34b-8460-4793-8753-a34ee4c0f0c5\") " pod="openshift-marketplace/certified-operators-lk95q" Dec 01 16:11:36 crc kubenswrapper[4637]: I1201 16:11:36.969991 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50cdd34b-8460-4793-8753-a34ee4c0f0c5-catalog-content\") pod \"certified-operators-lk95q\" (UID: \"50cdd34b-8460-4793-8753-a34ee4c0f0c5\") " pod="openshift-marketplace/certified-operators-lk95q" Dec 01 16:11:36 crc kubenswrapper[4637]: I1201 16:11:36.970617 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50cdd34b-8460-4793-8753-a34ee4c0f0c5-catalog-content\") pod \"certified-operators-lk95q\" (UID: \"50cdd34b-8460-4793-8753-a34ee4c0f0c5\") " pod="openshift-marketplace/certified-operators-lk95q" Dec 01 16:11:36 crc kubenswrapper[4637]: I1201 16:11:36.970742 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50cdd34b-8460-4793-8753-a34ee4c0f0c5-utilities\") pod \"certified-operators-lk95q\" (UID: \"50cdd34b-8460-4793-8753-a34ee4c0f0c5\") " pod="openshift-marketplace/certified-operators-lk95q" Dec 01 16:11:37 crc kubenswrapper[4637]: I1201 16:11:36.999201 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jpj4\" (UniqueName: \"kubernetes.io/projected/50cdd34b-8460-4793-8753-a34ee4c0f0c5-kube-api-access-8jpj4\") pod \"certified-operators-lk95q\" (UID: \"50cdd34b-8460-4793-8753-a34ee4c0f0c5\") " pod="openshift-marketplace/certified-operators-lk95q" Dec 01 16:11:37 crc kubenswrapper[4637]: I1201 16:11:37.072279 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lk95q" Dec 01 16:11:37 crc kubenswrapper[4637]: I1201 16:11:37.675604 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lk95q"] Dec 01 16:11:38 crc kubenswrapper[4637]: I1201 16:11:38.559379 4637 generic.go:334] "Generic (PLEG): container finished" podID="50cdd34b-8460-4793-8753-a34ee4c0f0c5" containerID="63a80bd1d65d46819c3ece97bdfa6ac621569e54a677b4fa514d1fb7d54d085b" exitCode=0 Dec 01 16:11:38 crc kubenswrapper[4637]: I1201 16:11:38.559470 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lk95q" event={"ID":"50cdd34b-8460-4793-8753-a34ee4c0f0c5","Type":"ContainerDied","Data":"63a80bd1d65d46819c3ece97bdfa6ac621569e54a677b4fa514d1fb7d54d085b"} Dec 01 16:11:38 crc kubenswrapper[4637]: I1201 16:11:38.559802 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lk95q" event={"ID":"50cdd34b-8460-4793-8753-a34ee4c0f0c5","Type":"ContainerStarted","Data":"048c031a66b85154ccc3048df8394b085a4937eed8736bdd7c472ef49f70a20a"} Dec 01 16:11:38 crc kubenswrapper[4637]: I1201 16:11:38.563244 4637 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 16:11:40 crc kubenswrapper[4637]: I1201 16:11:40.584428 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lk95q" event={"ID":"50cdd34b-8460-4793-8753-a34ee4c0f0c5","Type":"ContainerStarted","Data":"3763b034f05c1ff8fff855ef6ee85f29d64dfd0b3a64c2e1d36f3c4370a43f9d"} Dec 01 16:11:40 crc kubenswrapper[4637]: I1201 16:11:40.951616 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-64whm"] Dec 01 16:11:40 crc kubenswrapper[4637]: I1201 16:11:40.957077 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-64whm" Dec 01 16:11:40 crc kubenswrapper[4637]: I1201 16:11:40.969516 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-64whm"] Dec 01 16:11:41 crc kubenswrapper[4637]: I1201 16:11:41.050900 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sljdk\" (UniqueName: \"kubernetes.io/projected/e506e258-4906-40ee-830e-ef41f79f0dfa-kube-api-access-sljdk\") pod \"redhat-marketplace-64whm\" (UID: \"e506e258-4906-40ee-830e-ef41f79f0dfa\") " pod="openshift-marketplace/redhat-marketplace-64whm" Dec 01 16:11:41 crc kubenswrapper[4637]: I1201 16:11:41.051038 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e506e258-4906-40ee-830e-ef41f79f0dfa-catalog-content\") pod \"redhat-marketplace-64whm\" (UID: \"e506e258-4906-40ee-830e-ef41f79f0dfa\") " pod="openshift-marketplace/redhat-marketplace-64whm" Dec 01 16:11:41 crc kubenswrapper[4637]: I1201 16:11:41.051103 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e506e258-4906-40ee-830e-ef41f79f0dfa-utilities\") pod \"redhat-marketplace-64whm\" (UID: \"e506e258-4906-40ee-830e-ef41f79f0dfa\") " pod="openshift-marketplace/redhat-marketplace-64whm" Dec 01 16:11:41 crc kubenswrapper[4637]: I1201 16:11:41.153716 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e506e258-4906-40ee-830e-ef41f79f0dfa-catalog-content\") pod \"redhat-marketplace-64whm\" (UID: \"e506e258-4906-40ee-830e-ef41f79f0dfa\") " pod="openshift-marketplace/redhat-marketplace-64whm" Dec 01 16:11:41 crc kubenswrapper[4637]: I1201 16:11:41.153810 4637 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e506e258-4906-40ee-830e-ef41f79f0dfa-utilities\") pod \"redhat-marketplace-64whm\" (UID: \"e506e258-4906-40ee-830e-ef41f79f0dfa\") " pod="openshift-marketplace/redhat-marketplace-64whm" Dec 01 16:11:41 crc kubenswrapper[4637]: I1201 16:11:41.153978 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sljdk\" (UniqueName: \"kubernetes.io/projected/e506e258-4906-40ee-830e-ef41f79f0dfa-kube-api-access-sljdk\") pod \"redhat-marketplace-64whm\" (UID: \"e506e258-4906-40ee-830e-ef41f79f0dfa\") " pod="openshift-marketplace/redhat-marketplace-64whm" Dec 01 16:11:41 crc kubenswrapper[4637]: I1201 16:11:41.154243 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e506e258-4906-40ee-830e-ef41f79f0dfa-catalog-content\") pod \"redhat-marketplace-64whm\" (UID: \"e506e258-4906-40ee-830e-ef41f79f0dfa\") " pod="openshift-marketplace/redhat-marketplace-64whm" Dec 01 16:11:41 crc kubenswrapper[4637]: I1201 16:11:41.154266 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e506e258-4906-40ee-830e-ef41f79f0dfa-utilities\") pod \"redhat-marketplace-64whm\" (UID: \"e506e258-4906-40ee-830e-ef41f79f0dfa\") " pod="openshift-marketplace/redhat-marketplace-64whm" Dec 01 16:11:41 crc kubenswrapper[4637]: I1201 16:11:41.176805 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sljdk\" (UniqueName: \"kubernetes.io/projected/e506e258-4906-40ee-830e-ef41f79f0dfa-kube-api-access-sljdk\") pod \"redhat-marketplace-64whm\" (UID: \"e506e258-4906-40ee-830e-ef41f79f0dfa\") " pod="openshift-marketplace/redhat-marketplace-64whm" Dec 01 16:11:41 crc kubenswrapper[4637]: I1201 16:11:41.277769 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-64whm" Dec 01 16:11:41 crc kubenswrapper[4637]: I1201 16:11:41.611390 4637 generic.go:334] "Generic (PLEG): container finished" podID="50cdd34b-8460-4793-8753-a34ee4c0f0c5" containerID="3763b034f05c1ff8fff855ef6ee85f29d64dfd0b3a64c2e1d36f3c4370a43f9d" exitCode=0 Dec 01 16:11:41 crc kubenswrapper[4637]: I1201 16:11:41.613382 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lk95q" event={"ID":"50cdd34b-8460-4793-8753-a34ee4c0f0c5","Type":"ContainerDied","Data":"3763b034f05c1ff8fff855ef6ee85f29d64dfd0b3a64c2e1d36f3c4370a43f9d"} Dec 01 16:11:41 crc kubenswrapper[4637]: I1201 16:11:41.951826 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-64whm"] Dec 01 16:11:41 crc kubenswrapper[4637]: W1201 16:11:41.954180 4637 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode506e258_4906_40ee_830e_ef41f79f0dfa.slice/crio-77ec140503a11256f7eac2a034676e6106d9a2c9f414c869f064c5cd7c78e2d9 WatchSource:0}: Error finding container 77ec140503a11256f7eac2a034676e6106d9a2c9f414c869f064c5cd7c78e2d9: Status 404 returned error can't find the container with id 77ec140503a11256f7eac2a034676e6106d9a2c9f414c869f064c5cd7c78e2d9 Dec 01 16:11:42 crc kubenswrapper[4637]: I1201 16:11:42.624511 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64whm" event={"ID":"e506e258-4906-40ee-830e-ef41f79f0dfa","Type":"ContainerStarted","Data":"6fbe8ec6ec89ac516732ef7794543b1d26fe8b2294137cb69ee906b3279fbe64"} Dec 01 16:11:42 crc kubenswrapper[4637]: I1201 16:11:42.624999 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64whm" 
event={"ID":"e506e258-4906-40ee-830e-ef41f79f0dfa","Type":"ContainerStarted","Data":"77ec140503a11256f7eac2a034676e6106d9a2c9f414c869f064c5cd7c78e2d9"} Dec 01 16:11:43 crc kubenswrapper[4637]: I1201 16:11:43.635286 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lk95q" event={"ID":"50cdd34b-8460-4793-8753-a34ee4c0f0c5","Type":"ContainerStarted","Data":"58bd4cbf5399debb3b8dd01ccaa20e8cd42be844d859b3d929aba604f652ae68"} Dec 01 16:11:43 crc kubenswrapper[4637]: I1201 16:11:43.639211 4637 generic.go:334] "Generic (PLEG): container finished" podID="e506e258-4906-40ee-830e-ef41f79f0dfa" containerID="6fbe8ec6ec89ac516732ef7794543b1d26fe8b2294137cb69ee906b3279fbe64" exitCode=0 Dec 01 16:11:43 crc kubenswrapper[4637]: I1201 16:11:43.639311 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64whm" event={"ID":"e506e258-4906-40ee-830e-ef41f79f0dfa","Type":"ContainerDied","Data":"6fbe8ec6ec89ac516732ef7794543b1d26fe8b2294137cb69ee906b3279fbe64"} Dec 01 16:11:43 crc kubenswrapper[4637]: I1201 16:11:43.668283 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lk95q" podStartSLOduration=4.018983557 podStartE2EDuration="7.6682619s" podCreationTimestamp="2025-12-01 16:11:36 +0000 UTC" firstStartedPulling="2025-12-01 16:11:38.562841086 +0000 UTC m=+5149.080549914" lastFinishedPulling="2025-12-01 16:11:42.212119429 +0000 UTC m=+5152.729828257" observedRunningTime="2025-12-01 16:11:43.662122943 +0000 UTC m=+5154.179831761" watchObservedRunningTime="2025-12-01 16:11:43.6682619 +0000 UTC m=+5154.185970748" Dec 01 16:11:45 crc kubenswrapper[4637]: I1201 16:11:45.679428 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64whm" 
event={"ID":"e506e258-4906-40ee-830e-ef41f79f0dfa","Type":"ContainerStarted","Data":"3673853b3f9c1de1b0b7d21deea0a9cd3bc2ef50c23f73e2a78b3414555c67d3"} Dec 01 16:11:45 crc kubenswrapper[4637]: I1201 16:11:45.771545 4637 scope.go:117] "RemoveContainer" containerID="61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd" Dec 01 16:11:45 crc kubenswrapper[4637]: E1201 16:11:45.771895 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:11:46 crc kubenswrapper[4637]: I1201 16:11:46.701111 4637 generic.go:334] "Generic (PLEG): container finished" podID="e506e258-4906-40ee-830e-ef41f79f0dfa" containerID="3673853b3f9c1de1b0b7d21deea0a9cd3bc2ef50c23f73e2a78b3414555c67d3" exitCode=0 Dec 01 16:11:46 crc kubenswrapper[4637]: I1201 16:11:46.701406 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64whm" event={"ID":"e506e258-4906-40ee-830e-ef41f79f0dfa","Type":"ContainerDied","Data":"3673853b3f9c1de1b0b7d21deea0a9cd3bc2ef50c23f73e2a78b3414555c67d3"} Dec 01 16:11:47 crc kubenswrapper[4637]: I1201 16:11:47.073411 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lk95q" Dec 01 16:11:47 crc kubenswrapper[4637]: I1201 16:11:47.073722 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lk95q" Dec 01 16:11:47 crc kubenswrapper[4637]: I1201 16:11:47.128707 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lk95q" Dec 01 16:11:47 crc 
kubenswrapper[4637]: I1201 16:11:47.725885 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64whm" event={"ID":"e506e258-4906-40ee-830e-ef41f79f0dfa","Type":"ContainerStarted","Data":"21c5daa56112579cd8549ceed73e6dc7ee3320c5d167b5dd321dc5063afdf622"} Dec 01 16:11:47 crc kubenswrapper[4637]: I1201 16:11:47.762518 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-64whm" podStartSLOduration=4.197844702 podStartE2EDuration="7.762494817s" podCreationTimestamp="2025-12-01 16:11:40 +0000 UTC" firstStartedPulling="2025-12-01 16:11:43.640993862 +0000 UTC m=+5154.158702690" lastFinishedPulling="2025-12-01 16:11:47.205643977 +0000 UTC m=+5157.723352805" observedRunningTime="2025-12-01 16:11:47.752427215 +0000 UTC m=+5158.270136043" watchObservedRunningTime="2025-12-01 16:11:47.762494817 +0000 UTC m=+5158.280203635" Dec 01 16:11:47 crc kubenswrapper[4637]: I1201 16:11:47.793409 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lk95q" Dec 01 16:11:49 crc kubenswrapper[4637]: I1201 16:11:49.535449 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lk95q"] Dec 01 16:11:49 crc kubenswrapper[4637]: I1201 16:11:49.745131 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lk95q" podUID="50cdd34b-8460-4793-8753-a34ee4c0f0c5" containerName="registry-server" containerID="cri-o://58bd4cbf5399debb3b8dd01ccaa20e8cd42be844d859b3d929aba604f652ae68" gracePeriod=2 Dec 01 16:11:50 crc kubenswrapper[4637]: I1201 16:11:50.280660 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lk95q" Dec 01 16:11:50 crc kubenswrapper[4637]: I1201 16:11:50.373843 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50cdd34b-8460-4793-8753-a34ee4c0f0c5-utilities\") pod \"50cdd34b-8460-4793-8753-a34ee4c0f0c5\" (UID: \"50cdd34b-8460-4793-8753-a34ee4c0f0c5\") " Dec 01 16:11:50 crc kubenswrapper[4637]: I1201 16:11:50.373989 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jpj4\" (UniqueName: \"kubernetes.io/projected/50cdd34b-8460-4793-8753-a34ee4c0f0c5-kube-api-access-8jpj4\") pod \"50cdd34b-8460-4793-8753-a34ee4c0f0c5\" (UID: \"50cdd34b-8460-4793-8753-a34ee4c0f0c5\") " Dec 01 16:11:50 crc kubenswrapper[4637]: I1201 16:11:50.374251 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50cdd34b-8460-4793-8753-a34ee4c0f0c5-catalog-content\") pod \"50cdd34b-8460-4793-8753-a34ee4c0f0c5\" (UID: \"50cdd34b-8460-4793-8753-a34ee4c0f0c5\") " Dec 01 16:11:50 crc kubenswrapper[4637]: I1201 16:11:50.374770 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50cdd34b-8460-4793-8753-a34ee4c0f0c5-utilities" (OuterVolumeSpecName: "utilities") pod "50cdd34b-8460-4793-8753-a34ee4c0f0c5" (UID: "50cdd34b-8460-4793-8753-a34ee4c0f0c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 16:11:50 crc kubenswrapper[4637]: I1201 16:11:50.381998 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50cdd34b-8460-4793-8753-a34ee4c0f0c5-kube-api-access-8jpj4" (OuterVolumeSpecName: "kube-api-access-8jpj4") pod "50cdd34b-8460-4793-8753-a34ee4c0f0c5" (UID: "50cdd34b-8460-4793-8753-a34ee4c0f0c5"). InnerVolumeSpecName "kube-api-access-8jpj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 16:11:50 crc kubenswrapper[4637]: I1201 16:11:50.477467 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50cdd34b-8460-4793-8753-a34ee4c0f0c5-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 16:11:50 crc kubenswrapper[4637]: I1201 16:11:50.477505 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jpj4\" (UniqueName: \"kubernetes.io/projected/50cdd34b-8460-4793-8753-a34ee4c0f0c5-kube-api-access-8jpj4\") on node \"crc\" DevicePath \"\"" Dec 01 16:11:50 crc kubenswrapper[4637]: I1201 16:11:50.710032 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50cdd34b-8460-4793-8753-a34ee4c0f0c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50cdd34b-8460-4793-8753-a34ee4c0f0c5" (UID: "50cdd34b-8460-4793-8753-a34ee4c0f0c5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 16:11:50 crc kubenswrapper[4637]: I1201 16:11:50.767527 4637 generic.go:334] "Generic (PLEG): container finished" podID="50cdd34b-8460-4793-8753-a34ee4c0f0c5" containerID="58bd4cbf5399debb3b8dd01ccaa20e8cd42be844d859b3d929aba604f652ae68" exitCode=0 Dec 01 16:11:50 crc kubenswrapper[4637]: I1201 16:11:50.767575 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lk95q" event={"ID":"50cdd34b-8460-4793-8753-a34ee4c0f0c5","Type":"ContainerDied","Data":"58bd4cbf5399debb3b8dd01ccaa20e8cd42be844d859b3d929aba604f652ae68"} Dec 01 16:11:50 crc kubenswrapper[4637]: I1201 16:11:50.767594 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lk95q" Dec 01 16:11:50 crc kubenswrapper[4637]: I1201 16:11:50.767618 4637 scope.go:117] "RemoveContainer" containerID="58bd4cbf5399debb3b8dd01ccaa20e8cd42be844d859b3d929aba604f652ae68" Dec 01 16:11:50 crc kubenswrapper[4637]: I1201 16:11:50.767606 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lk95q" event={"ID":"50cdd34b-8460-4793-8753-a34ee4c0f0c5","Type":"ContainerDied","Data":"048c031a66b85154ccc3048df8394b085a4937eed8736bdd7c472ef49f70a20a"} Dec 01 16:11:50 crc kubenswrapper[4637]: I1201 16:11:50.782063 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50cdd34b-8460-4793-8753-a34ee4c0f0c5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 16:11:50 crc kubenswrapper[4637]: I1201 16:11:50.785785 4637 scope.go:117] "RemoveContainer" containerID="3763b034f05c1ff8fff855ef6ee85f29d64dfd0b3a64c2e1d36f3c4370a43f9d" Dec 01 16:11:50 crc kubenswrapper[4637]: I1201 16:11:50.807233 4637 scope.go:117] "RemoveContainer" containerID="63a80bd1d65d46819c3ece97bdfa6ac621569e54a677b4fa514d1fb7d54d085b" Dec 01 16:11:50 crc kubenswrapper[4637]: I1201 16:11:50.821975 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lk95q"] Dec 01 16:11:50 crc kubenswrapper[4637]: I1201 16:11:50.830591 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lk95q"] Dec 01 16:11:50 crc kubenswrapper[4637]: I1201 16:11:50.873374 4637 scope.go:117] "RemoveContainer" containerID="58bd4cbf5399debb3b8dd01ccaa20e8cd42be844d859b3d929aba604f652ae68" Dec 01 16:11:50 crc kubenswrapper[4637]: E1201 16:11:50.876038 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58bd4cbf5399debb3b8dd01ccaa20e8cd42be844d859b3d929aba604f652ae68\": 
container with ID starting with 58bd4cbf5399debb3b8dd01ccaa20e8cd42be844d859b3d929aba604f652ae68 not found: ID does not exist" containerID="58bd4cbf5399debb3b8dd01ccaa20e8cd42be844d859b3d929aba604f652ae68" Dec 01 16:11:50 crc kubenswrapper[4637]: I1201 16:11:50.876082 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58bd4cbf5399debb3b8dd01ccaa20e8cd42be844d859b3d929aba604f652ae68"} err="failed to get container status \"58bd4cbf5399debb3b8dd01ccaa20e8cd42be844d859b3d929aba604f652ae68\": rpc error: code = NotFound desc = could not find container \"58bd4cbf5399debb3b8dd01ccaa20e8cd42be844d859b3d929aba604f652ae68\": container with ID starting with 58bd4cbf5399debb3b8dd01ccaa20e8cd42be844d859b3d929aba604f652ae68 not found: ID does not exist" Dec 01 16:11:50 crc kubenswrapper[4637]: I1201 16:11:50.876112 4637 scope.go:117] "RemoveContainer" containerID="3763b034f05c1ff8fff855ef6ee85f29d64dfd0b3a64c2e1d36f3c4370a43f9d" Dec 01 16:11:50 crc kubenswrapper[4637]: E1201 16:11:50.877857 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3763b034f05c1ff8fff855ef6ee85f29d64dfd0b3a64c2e1d36f3c4370a43f9d\": container with ID starting with 3763b034f05c1ff8fff855ef6ee85f29d64dfd0b3a64c2e1d36f3c4370a43f9d not found: ID does not exist" containerID="3763b034f05c1ff8fff855ef6ee85f29d64dfd0b3a64c2e1d36f3c4370a43f9d" Dec 01 16:11:50 crc kubenswrapper[4637]: I1201 16:11:50.877908 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3763b034f05c1ff8fff855ef6ee85f29d64dfd0b3a64c2e1d36f3c4370a43f9d"} err="failed to get container status \"3763b034f05c1ff8fff855ef6ee85f29d64dfd0b3a64c2e1d36f3c4370a43f9d\": rpc error: code = NotFound desc = could not find container \"3763b034f05c1ff8fff855ef6ee85f29d64dfd0b3a64c2e1d36f3c4370a43f9d\": container with ID starting with 
3763b034f05c1ff8fff855ef6ee85f29d64dfd0b3a64c2e1d36f3c4370a43f9d not found: ID does not exist" Dec 01 16:11:50 crc kubenswrapper[4637]: I1201 16:11:50.878284 4637 scope.go:117] "RemoveContainer" containerID="63a80bd1d65d46819c3ece97bdfa6ac621569e54a677b4fa514d1fb7d54d085b" Dec 01 16:11:50 crc kubenswrapper[4637]: E1201 16:11:50.878908 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63a80bd1d65d46819c3ece97bdfa6ac621569e54a677b4fa514d1fb7d54d085b\": container with ID starting with 63a80bd1d65d46819c3ece97bdfa6ac621569e54a677b4fa514d1fb7d54d085b not found: ID does not exist" containerID="63a80bd1d65d46819c3ece97bdfa6ac621569e54a677b4fa514d1fb7d54d085b" Dec 01 16:11:50 crc kubenswrapper[4637]: I1201 16:11:50.878995 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63a80bd1d65d46819c3ece97bdfa6ac621569e54a677b4fa514d1fb7d54d085b"} err="failed to get container status \"63a80bd1d65d46819c3ece97bdfa6ac621569e54a677b4fa514d1fb7d54d085b\": rpc error: code = NotFound desc = could not find container \"63a80bd1d65d46819c3ece97bdfa6ac621569e54a677b4fa514d1fb7d54d085b\": container with ID starting with 63a80bd1d65d46819c3ece97bdfa6ac621569e54a677b4fa514d1fb7d54d085b not found: ID does not exist" Dec 01 16:11:51 crc kubenswrapper[4637]: I1201 16:11:51.279331 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-64whm" Dec 01 16:11:51 crc kubenswrapper[4637]: I1201 16:11:51.279384 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-64whm" Dec 01 16:11:51 crc kubenswrapper[4637]: I1201 16:11:51.324825 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-64whm" Dec 01 16:11:51 crc kubenswrapper[4637]: I1201 16:11:51.783739 4637 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="50cdd34b-8460-4793-8753-a34ee4c0f0c5" path="/var/lib/kubelet/pods/50cdd34b-8460-4793-8753-a34ee4c0f0c5/volumes" Dec 01 16:11:58 crc kubenswrapper[4637]: I1201 16:11:58.771764 4637 scope.go:117] "RemoveContainer" containerID="61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd" Dec 01 16:11:58 crc kubenswrapper[4637]: E1201 16:11:58.772675 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:12:01 crc kubenswrapper[4637]: I1201 16:12:01.330885 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-64whm" Dec 01 16:12:06 crc kubenswrapper[4637]: I1201 16:12:06.294279 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-64whm"] Dec 01 16:12:06 crc kubenswrapper[4637]: I1201 16:12:06.295362 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-64whm" podUID="e506e258-4906-40ee-830e-ef41f79f0dfa" containerName="registry-server" containerID="cri-o://21c5daa56112579cd8549ceed73e6dc7ee3320c5d167b5dd321dc5063afdf622" gracePeriod=2 Dec 01 16:12:06 crc kubenswrapper[4637]: I1201 16:12:06.869277 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-64whm" Dec 01 16:12:06 crc kubenswrapper[4637]: I1201 16:12:06.966924 4637 generic.go:334] "Generic (PLEG): container finished" podID="e506e258-4906-40ee-830e-ef41f79f0dfa" containerID="21c5daa56112579cd8549ceed73e6dc7ee3320c5d167b5dd321dc5063afdf622" exitCode=0 Dec 01 16:12:06 crc kubenswrapper[4637]: I1201 16:12:06.967546 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-64whm" Dec 01 16:12:06 crc kubenswrapper[4637]: I1201 16:12:06.968145 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64whm" event={"ID":"e506e258-4906-40ee-830e-ef41f79f0dfa","Type":"ContainerDied","Data":"21c5daa56112579cd8549ceed73e6dc7ee3320c5d167b5dd321dc5063afdf622"} Dec 01 16:12:06 crc kubenswrapper[4637]: I1201 16:12:06.968190 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64whm" event={"ID":"e506e258-4906-40ee-830e-ef41f79f0dfa","Type":"ContainerDied","Data":"77ec140503a11256f7eac2a034676e6106d9a2c9f414c869f064c5cd7c78e2d9"} Dec 01 16:12:06 crc kubenswrapper[4637]: I1201 16:12:06.968208 4637 scope.go:117] "RemoveContainer" containerID="21c5daa56112579cd8549ceed73e6dc7ee3320c5d167b5dd321dc5063afdf622" Dec 01 16:12:07 crc kubenswrapper[4637]: I1201 16:12:07.003784 4637 scope.go:117] "RemoveContainer" containerID="3673853b3f9c1de1b0b7d21deea0a9cd3bc2ef50c23f73e2a78b3414555c67d3" Dec 01 16:12:07 crc kubenswrapper[4637]: I1201 16:12:07.027428 4637 scope.go:117] "RemoveContainer" containerID="6fbe8ec6ec89ac516732ef7794543b1d26fe8b2294137cb69ee906b3279fbe64" Dec 01 16:12:07 crc kubenswrapper[4637]: I1201 16:12:07.044709 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sljdk\" (UniqueName: \"kubernetes.io/projected/e506e258-4906-40ee-830e-ef41f79f0dfa-kube-api-access-sljdk\") pod 
\"e506e258-4906-40ee-830e-ef41f79f0dfa\" (UID: \"e506e258-4906-40ee-830e-ef41f79f0dfa\") " Dec 01 16:12:07 crc kubenswrapper[4637]: I1201 16:12:07.044975 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e506e258-4906-40ee-830e-ef41f79f0dfa-catalog-content\") pod \"e506e258-4906-40ee-830e-ef41f79f0dfa\" (UID: \"e506e258-4906-40ee-830e-ef41f79f0dfa\") " Dec 01 16:12:07 crc kubenswrapper[4637]: I1201 16:12:07.045205 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e506e258-4906-40ee-830e-ef41f79f0dfa-utilities\") pod \"e506e258-4906-40ee-830e-ef41f79f0dfa\" (UID: \"e506e258-4906-40ee-830e-ef41f79f0dfa\") " Dec 01 16:12:07 crc kubenswrapper[4637]: I1201 16:12:07.045857 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e506e258-4906-40ee-830e-ef41f79f0dfa-utilities" (OuterVolumeSpecName: "utilities") pod "e506e258-4906-40ee-830e-ef41f79f0dfa" (UID: "e506e258-4906-40ee-830e-ef41f79f0dfa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 16:12:07 crc kubenswrapper[4637]: I1201 16:12:07.046197 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e506e258-4906-40ee-830e-ef41f79f0dfa-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 16:12:07 crc kubenswrapper[4637]: I1201 16:12:07.052072 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e506e258-4906-40ee-830e-ef41f79f0dfa-kube-api-access-sljdk" (OuterVolumeSpecName: "kube-api-access-sljdk") pod "e506e258-4906-40ee-830e-ef41f79f0dfa" (UID: "e506e258-4906-40ee-830e-ef41f79f0dfa"). InnerVolumeSpecName "kube-api-access-sljdk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 16:12:07 crc kubenswrapper[4637]: I1201 16:12:07.062854 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e506e258-4906-40ee-830e-ef41f79f0dfa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e506e258-4906-40ee-830e-ef41f79f0dfa" (UID: "e506e258-4906-40ee-830e-ef41f79f0dfa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 16:12:07 crc kubenswrapper[4637]: I1201 16:12:07.118957 4637 scope.go:117] "RemoveContainer" containerID="21c5daa56112579cd8549ceed73e6dc7ee3320c5d167b5dd321dc5063afdf622" Dec 01 16:12:07 crc kubenswrapper[4637]: E1201 16:12:07.119386 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21c5daa56112579cd8549ceed73e6dc7ee3320c5d167b5dd321dc5063afdf622\": container with ID starting with 21c5daa56112579cd8549ceed73e6dc7ee3320c5d167b5dd321dc5063afdf622 not found: ID does not exist" containerID="21c5daa56112579cd8549ceed73e6dc7ee3320c5d167b5dd321dc5063afdf622" Dec 01 16:12:07 crc kubenswrapper[4637]: I1201 16:12:07.119419 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21c5daa56112579cd8549ceed73e6dc7ee3320c5d167b5dd321dc5063afdf622"} err="failed to get container status \"21c5daa56112579cd8549ceed73e6dc7ee3320c5d167b5dd321dc5063afdf622\": rpc error: code = NotFound desc = could not find container \"21c5daa56112579cd8549ceed73e6dc7ee3320c5d167b5dd321dc5063afdf622\": container with ID starting with 21c5daa56112579cd8549ceed73e6dc7ee3320c5d167b5dd321dc5063afdf622 not found: ID does not exist" Dec 01 16:12:07 crc kubenswrapper[4637]: I1201 16:12:07.119439 4637 scope.go:117] "RemoveContainer" containerID="3673853b3f9c1de1b0b7d21deea0a9cd3bc2ef50c23f73e2a78b3414555c67d3" Dec 01 16:12:07 crc kubenswrapper[4637]: E1201 16:12:07.119905 4637 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3673853b3f9c1de1b0b7d21deea0a9cd3bc2ef50c23f73e2a78b3414555c67d3\": container with ID starting with 3673853b3f9c1de1b0b7d21deea0a9cd3bc2ef50c23f73e2a78b3414555c67d3 not found: ID does not exist" containerID="3673853b3f9c1de1b0b7d21deea0a9cd3bc2ef50c23f73e2a78b3414555c67d3" Dec 01 16:12:07 crc kubenswrapper[4637]: I1201 16:12:07.120087 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3673853b3f9c1de1b0b7d21deea0a9cd3bc2ef50c23f73e2a78b3414555c67d3"} err="failed to get container status \"3673853b3f9c1de1b0b7d21deea0a9cd3bc2ef50c23f73e2a78b3414555c67d3\": rpc error: code = NotFound desc = could not find container \"3673853b3f9c1de1b0b7d21deea0a9cd3bc2ef50c23f73e2a78b3414555c67d3\": container with ID starting with 3673853b3f9c1de1b0b7d21deea0a9cd3bc2ef50c23f73e2a78b3414555c67d3 not found: ID does not exist" Dec 01 16:12:07 crc kubenswrapper[4637]: I1201 16:12:07.120200 4637 scope.go:117] "RemoveContainer" containerID="6fbe8ec6ec89ac516732ef7794543b1d26fe8b2294137cb69ee906b3279fbe64" Dec 01 16:12:07 crc kubenswrapper[4637]: E1201 16:12:07.120681 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fbe8ec6ec89ac516732ef7794543b1d26fe8b2294137cb69ee906b3279fbe64\": container with ID starting with 6fbe8ec6ec89ac516732ef7794543b1d26fe8b2294137cb69ee906b3279fbe64 not found: ID does not exist" containerID="6fbe8ec6ec89ac516732ef7794543b1d26fe8b2294137cb69ee906b3279fbe64" Dec 01 16:12:07 crc kubenswrapper[4637]: I1201 16:12:07.120703 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fbe8ec6ec89ac516732ef7794543b1d26fe8b2294137cb69ee906b3279fbe64"} err="failed to get container status \"6fbe8ec6ec89ac516732ef7794543b1d26fe8b2294137cb69ee906b3279fbe64\": rpc error: code = NotFound desc = could 
not find container \"6fbe8ec6ec89ac516732ef7794543b1d26fe8b2294137cb69ee906b3279fbe64\": container with ID starting with 6fbe8ec6ec89ac516732ef7794543b1d26fe8b2294137cb69ee906b3279fbe64 not found: ID does not exist" Dec 01 16:12:07 crc kubenswrapper[4637]: I1201 16:12:07.148563 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e506e258-4906-40ee-830e-ef41f79f0dfa-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 16:12:07 crc kubenswrapper[4637]: I1201 16:12:07.148601 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sljdk\" (UniqueName: \"kubernetes.io/projected/e506e258-4906-40ee-830e-ef41f79f0dfa-kube-api-access-sljdk\") on node \"crc\" DevicePath \"\"" Dec 01 16:12:07 crc kubenswrapper[4637]: I1201 16:12:07.311157 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-64whm"] Dec 01 16:12:07 crc kubenswrapper[4637]: I1201 16:12:07.319642 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-64whm"] Dec 01 16:12:07 crc kubenswrapper[4637]: I1201 16:12:07.781850 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e506e258-4906-40ee-830e-ef41f79f0dfa" path="/var/lib/kubelet/pods/e506e258-4906-40ee-830e-ef41f79f0dfa/volumes" Dec 01 16:12:09 crc kubenswrapper[4637]: I1201 16:12:09.775901 4637 scope.go:117] "RemoveContainer" containerID="61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd" Dec 01 16:12:09 crc kubenswrapper[4637]: E1201 16:12:09.776769 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" 
podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:12:21 crc kubenswrapper[4637]: I1201 16:12:21.771171 4637 scope.go:117] "RemoveContainer" containerID="61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd" Dec 01 16:12:21 crc kubenswrapper[4637]: E1201 16:12:21.772085 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:12:33 crc kubenswrapper[4637]: I1201 16:12:33.772762 4637 scope.go:117] "RemoveContainer" containerID="61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd" Dec 01 16:12:33 crc kubenswrapper[4637]: E1201 16:12:33.773758 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:12:41 crc kubenswrapper[4637]: I1201 16:12:41.319650 4637 generic.go:334] "Generic (PLEG): container finished" podID="423f70f5-df78-4323-b374-2c7a5eb93b31" containerID="c1490ba562c717c4fb02dc2d91ad265b7f90d7c7f54d4cf32116c37ad000a5f3" exitCode=0 Dec 01 16:12:41 crc kubenswrapper[4637]: I1201 16:12:41.319754 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-whvdq/must-gather-jlzmv" event={"ID":"423f70f5-df78-4323-b374-2c7a5eb93b31","Type":"ContainerDied","Data":"c1490ba562c717c4fb02dc2d91ad265b7f90d7c7f54d4cf32116c37ad000a5f3"} Dec 01 16:12:41 crc 
kubenswrapper[4637]: I1201 16:12:41.321378 4637 scope.go:117] "RemoveContainer" containerID="c1490ba562c717c4fb02dc2d91ad265b7f90d7c7f54d4cf32116c37ad000a5f3" Dec 01 16:12:42 crc kubenswrapper[4637]: I1201 16:12:42.202616 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-whvdq_must-gather-jlzmv_423f70f5-df78-4323-b374-2c7a5eb93b31/gather/0.log" Dec 01 16:12:47 crc kubenswrapper[4637]: I1201 16:12:47.771300 4637 scope.go:117] "RemoveContainer" containerID="61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd" Dec 01 16:12:47 crc kubenswrapper[4637]: E1201 16:12:47.772323 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:12:55 crc kubenswrapper[4637]: I1201 16:12:55.611520 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-whvdq/must-gather-jlzmv"] Dec 01 16:12:55 crc kubenswrapper[4637]: I1201 16:12:55.614572 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-whvdq/must-gather-jlzmv" podUID="423f70f5-df78-4323-b374-2c7a5eb93b31" containerName="copy" containerID="cri-o://2827f44126455436da431ea422a9fb190ac0f05e3b86ae9ee0c01dfa1b6ad3e8" gracePeriod=2 Dec 01 16:12:55 crc kubenswrapper[4637]: I1201 16:12:55.635979 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-whvdq/must-gather-jlzmv"] Dec 01 16:12:56 crc kubenswrapper[4637]: I1201 16:12:56.119961 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-whvdq_must-gather-jlzmv_423f70f5-df78-4323-b374-2c7a5eb93b31/copy/0.log" Dec 01 16:12:56 crc 
kubenswrapper[4637]: I1201 16:12:56.120948 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-whvdq/must-gather-jlzmv" Dec 01 16:12:56 crc kubenswrapper[4637]: I1201 16:12:56.257843 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbsls\" (UniqueName: \"kubernetes.io/projected/423f70f5-df78-4323-b374-2c7a5eb93b31-kube-api-access-zbsls\") pod \"423f70f5-df78-4323-b374-2c7a5eb93b31\" (UID: \"423f70f5-df78-4323-b374-2c7a5eb93b31\") " Dec 01 16:12:56 crc kubenswrapper[4637]: I1201 16:12:56.258279 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/423f70f5-df78-4323-b374-2c7a5eb93b31-must-gather-output\") pod \"423f70f5-df78-4323-b374-2c7a5eb93b31\" (UID: \"423f70f5-df78-4323-b374-2c7a5eb93b31\") " Dec 01 16:12:56 crc kubenswrapper[4637]: I1201 16:12:56.267724 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/423f70f5-df78-4323-b374-2c7a5eb93b31-kube-api-access-zbsls" (OuterVolumeSpecName: "kube-api-access-zbsls") pod "423f70f5-df78-4323-b374-2c7a5eb93b31" (UID: "423f70f5-df78-4323-b374-2c7a5eb93b31"). InnerVolumeSpecName "kube-api-access-zbsls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 16:12:56 crc kubenswrapper[4637]: I1201 16:12:56.360752 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbsls\" (UniqueName: \"kubernetes.io/projected/423f70f5-df78-4323-b374-2c7a5eb93b31-kube-api-access-zbsls\") on node \"crc\" DevicePath \"\"" Dec 01 16:12:56 crc kubenswrapper[4637]: I1201 16:12:56.438740 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/423f70f5-df78-4323-b374-2c7a5eb93b31-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "423f70f5-df78-4323-b374-2c7a5eb93b31" (UID: "423f70f5-df78-4323-b374-2c7a5eb93b31"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 16:12:56 crc kubenswrapper[4637]: I1201 16:12:56.464714 4637 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/423f70f5-df78-4323-b374-2c7a5eb93b31-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 01 16:12:56 crc kubenswrapper[4637]: I1201 16:12:56.468954 4637 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-whvdq_must-gather-jlzmv_423f70f5-df78-4323-b374-2c7a5eb93b31/copy/0.log" Dec 01 16:12:56 crc kubenswrapper[4637]: I1201 16:12:56.469304 4637 generic.go:334] "Generic (PLEG): container finished" podID="423f70f5-df78-4323-b374-2c7a5eb93b31" containerID="2827f44126455436da431ea422a9fb190ac0f05e3b86ae9ee0c01dfa1b6ad3e8" exitCode=143 Dec 01 16:12:56 crc kubenswrapper[4637]: I1201 16:12:56.469382 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-whvdq/must-gather-jlzmv" Dec 01 16:12:56 crc kubenswrapper[4637]: I1201 16:12:56.469386 4637 scope.go:117] "RemoveContainer" containerID="2827f44126455436da431ea422a9fb190ac0f05e3b86ae9ee0c01dfa1b6ad3e8" Dec 01 16:12:56 crc kubenswrapper[4637]: I1201 16:12:56.497333 4637 scope.go:117] "RemoveContainer" containerID="c1490ba562c717c4fb02dc2d91ad265b7f90d7c7f54d4cf32116c37ad000a5f3" Dec 01 16:12:56 crc kubenswrapper[4637]: I1201 16:12:56.577583 4637 scope.go:117] "RemoveContainer" containerID="2827f44126455436da431ea422a9fb190ac0f05e3b86ae9ee0c01dfa1b6ad3e8" Dec 01 16:12:56 crc kubenswrapper[4637]: E1201 16:12:56.578106 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2827f44126455436da431ea422a9fb190ac0f05e3b86ae9ee0c01dfa1b6ad3e8\": container with ID starting with 2827f44126455436da431ea422a9fb190ac0f05e3b86ae9ee0c01dfa1b6ad3e8 not found: ID does not exist" containerID="2827f44126455436da431ea422a9fb190ac0f05e3b86ae9ee0c01dfa1b6ad3e8" Dec 01 16:12:56 crc kubenswrapper[4637]: I1201 16:12:56.578173 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2827f44126455436da431ea422a9fb190ac0f05e3b86ae9ee0c01dfa1b6ad3e8"} err="failed to get container status \"2827f44126455436da431ea422a9fb190ac0f05e3b86ae9ee0c01dfa1b6ad3e8\": rpc error: code = NotFound desc = could not find container \"2827f44126455436da431ea422a9fb190ac0f05e3b86ae9ee0c01dfa1b6ad3e8\": container with ID starting with 2827f44126455436da431ea422a9fb190ac0f05e3b86ae9ee0c01dfa1b6ad3e8 not found: ID does not exist" Dec 01 16:12:56 crc kubenswrapper[4637]: I1201 16:12:56.578204 4637 scope.go:117] "RemoveContainer" containerID="c1490ba562c717c4fb02dc2d91ad265b7f90d7c7f54d4cf32116c37ad000a5f3" Dec 01 16:12:56 crc kubenswrapper[4637]: E1201 16:12:56.578777 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"c1490ba562c717c4fb02dc2d91ad265b7f90d7c7f54d4cf32116c37ad000a5f3\": container with ID starting with c1490ba562c717c4fb02dc2d91ad265b7f90d7c7f54d4cf32116c37ad000a5f3 not found: ID does not exist" containerID="c1490ba562c717c4fb02dc2d91ad265b7f90d7c7f54d4cf32116c37ad000a5f3" Dec 01 16:12:56 crc kubenswrapper[4637]: I1201 16:12:56.578819 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1490ba562c717c4fb02dc2d91ad265b7f90d7c7f54d4cf32116c37ad000a5f3"} err="failed to get container status \"c1490ba562c717c4fb02dc2d91ad265b7f90d7c7f54d4cf32116c37ad000a5f3\": rpc error: code = NotFound desc = could not find container \"c1490ba562c717c4fb02dc2d91ad265b7f90d7c7f54d4cf32116c37ad000a5f3\": container with ID starting with c1490ba562c717c4fb02dc2d91ad265b7f90d7c7f54d4cf32116c37ad000a5f3 not found: ID does not exist" Dec 01 16:12:57 crc kubenswrapper[4637]: I1201 16:12:57.782332 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="423f70f5-df78-4323-b374-2c7a5eb93b31" path="/var/lib/kubelet/pods/423f70f5-df78-4323-b374-2c7a5eb93b31/volumes" Dec 01 16:13:01 crc kubenswrapper[4637]: I1201 16:13:01.771466 4637 scope.go:117] "RemoveContainer" containerID="61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd" Dec 01 16:13:01 crc kubenswrapper[4637]: E1201 16:13:01.772322 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:13:14 crc kubenswrapper[4637]: I1201 16:13:14.772762 4637 scope.go:117] "RemoveContainer" 
containerID="61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd" Dec 01 16:13:14 crc kubenswrapper[4637]: E1201 16:13:14.773736 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:13:29 crc kubenswrapper[4637]: I1201 16:13:29.786136 4637 scope.go:117] "RemoveContainer" containerID="61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd" Dec 01 16:13:29 crc kubenswrapper[4637]: E1201 16:13:29.787339 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:13:36 crc kubenswrapper[4637]: I1201 16:13:36.086713 4637 scope.go:117] "RemoveContainer" containerID="7b643ebed751a6122230bda030df19d8d3775968cf5785f413cbe2f97a41644e" Dec 01 16:13:36 crc kubenswrapper[4637]: I1201 16:13:36.112231 4637 scope.go:117] "RemoveContainer" containerID="261d7b22e97363f6940daaab35b507c4e48d36618a5bda4015dc43c7ee136514" Dec 01 16:13:44 crc kubenswrapper[4637]: I1201 16:13:44.771529 4637 scope.go:117] "RemoveContainer" containerID="61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd" Dec 01 16:13:44 crc kubenswrapper[4637]: E1201 16:13:44.772527 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:13:59 crc kubenswrapper[4637]: I1201 16:13:59.783232 4637 scope.go:117] "RemoveContainer" containerID="61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd" Dec 01 16:13:59 crc kubenswrapper[4637]: E1201 16:13:59.784228 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:14:02 crc kubenswrapper[4637]: I1201 16:14:02.674410 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m8vd4"] Dec 01 16:14:02 crc kubenswrapper[4637]: E1201 16:14:02.676420 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e506e258-4906-40ee-830e-ef41f79f0dfa" containerName="registry-server" Dec 01 16:14:02 crc kubenswrapper[4637]: I1201 16:14:02.676516 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="e506e258-4906-40ee-830e-ef41f79f0dfa" containerName="registry-server" Dec 01 16:14:02 crc kubenswrapper[4637]: E1201 16:14:02.676598 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e506e258-4906-40ee-830e-ef41f79f0dfa" containerName="extract-content" Dec 01 16:14:02 crc kubenswrapper[4637]: I1201 16:14:02.676678 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="e506e258-4906-40ee-830e-ef41f79f0dfa" containerName="extract-content" Dec 01 16:14:02 crc kubenswrapper[4637]: E1201 16:14:02.676752 4637 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="50cdd34b-8460-4793-8753-a34ee4c0f0c5" containerName="extract-content" Dec 01 16:14:02 crc kubenswrapper[4637]: I1201 16:14:02.676826 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="50cdd34b-8460-4793-8753-a34ee4c0f0c5" containerName="extract-content" Dec 01 16:14:02 crc kubenswrapper[4637]: E1201 16:14:02.676907 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e506e258-4906-40ee-830e-ef41f79f0dfa" containerName="extract-utilities" Dec 01 16:14:02 crc kubenswrapper[4637]: I1201 16:14:02.677003 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="e506e258-4906-40ee-830e-ef41f79f0dfa" containerName="extract-utilities" Dec 01 16:14:02 crc kubenswrapper[4637]: E1201 16:14:02.677079 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="423f70f5-df78-4323-b374-2c7a5eb93b31" containerName="gather" Dec 01 16:14:02 crc kubenswrapper[4637]: I1201 16:14:02.677145 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="423f70f5-df78-4323-b374-2c7a5eb93b31" containerName="gather" Dec 01 16:14:02 crc kubenswrapper[4637]: E1201 16:14:02.677213 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="423f70f5-df78-4323-b374-2c7a5eb93b31" containerName="copy" Dec 01 16:14:02 crc kubenswrapper[4637]: I1201 16:14:02.677282 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="423f70f5-df78-4323-b374-2c7a5eb93b31" containerName="copy" Dec 01 16:14:02 crc kubenswrapper[4637]: E1201 16:14:02.677373 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50cdd34b-8460-4793-8753-a34ee4c0f0c5" containerName="registry-server" Dec 01 16:14:02 crc kubenswrapper[4637]: I1201 16:14:02.677441 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="50cdd34b-8460-4793-8753-a34ee4c0f0c5" containerName="registry-server" Dec 01 16:14:02 crc kubenswrapper[4637]: E1201 16:14:02.677523 4637 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="50cdd34b-8460-4793-8753-a34ee4c0f0c5" containerName="extract-utilities" Dec 01 16:14:02 crc kubenswrapper[4637]: I1201 16:14:02.677590 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="50cdd34b-8460-4793-8753-a34ee4c0f0c5" containerName="extract-utilities" Dec 01 16:14:02 crc kubenswrapper[4637]: I1201 16:14:02.677888 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="423f70f5-df78-4323-b374-2c7a5eb93b31" containerName="copy" Dec 01 16:14:02 crc kubenswrapper[4637]: I1201 16:14:02.678026 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="e506e258-4906-40ee-830e-ef41f79f0dfa" containerName="registry-server" Dec 01 16:14:02 crc kubenswrapper[4637]: I1201 16:14:02.678153 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="50cdd34b-8460-4793-8753-a34ee4c0f0c5" containerName="registry-server" Dec 01 16:14:02 crc kubenswrapper[4637]: I1201 16:14:02.678226 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="423f70f5-df78-4323-b374-2c7a5eb93b31" containerName="gather" Dec 01 16:14:02 crc kubenswrapper[4637]: I1201 16:14:02.680039 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m8vd4" Dec 01 16:14:02 crc kubenswrapper[4637]: I1201 16:14:02.700282 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m8vd4"] Dec 01 16:14:02 crc kubenswrapper[4637]: I1201 16:14:02.792310 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbhhv\" (UniqueName: \"kubernetes.io/projected/478661fa-1a31-4bbc-9566-541a84e1ccfc-kube-api-access-vbhhv\") pod \"community-operators-m8vd4\" (UID: \"478661fa-1a31-4bbc-9566-541a84e1ccfc\") " pod="openshift-marketplace/community-operators-m8vd4" Dec 01 16:14:02 crc kubenswrapper[4637]: I1201 16:14:02.792704 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478661fa-1a31-4bbc-9566-541a84e1ccfc-catalog-content\") pod \"community-operators-m8vd4\" (UID: \"478661fa-1a31-4bbc-9566-541a84e1ccfc\") " pod="openshift-marketplace/community-operators-m8vd4" Dec 01 16:14:02 crc kubenswrapper[4637]: I1201 16:14:02.792836 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478661fa-1a31-4bbc-9566-541a84e1ccfc-utilities\") pod \"community-operators-m8vd4\" (UID: \"478661fa-1a31-4bbc-9566-541a84e1ccfc\") " pod="openshift-marketplace/community-operators-m8vd4" Dec 01 16:14:02 crc kubenswrapper[4637]: I1201 16:14:02.894879 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbhhv\" (UniqueName: \"kubernetes.io/projected/478661fa-1a31-4bbc-9566-541a84e1ccfc-kube-api-access-vbhhv\") pod \"community-operators-m8vd4\" (UID: \"478661fa-1a31-4bbc-9566-541a84e1ccfc\") " pod="openshift-marketplace/community-operators-m8vd4" Dec 01 16:14:02 crc kubenswrapper[4637]: I1201 16:14:02.895231 4637 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478661fa-1a31-4bbc-9566-541a84e1ccfc-catalog-content\") pod \"community-operators-m8vd4\" (UID: \"478661fa-1a31-4bbc-9566-541a84e1ccfc\") " pod="openshift-marketplace/community-operators-m8vd4" Dec 01 16:14:02 crc kubenswrapper[4637]: I1201 16:14:02.895269 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478661fa-1a31-4bbc-9566-541a84e1ccfc-utilities\") pod \"community-operators-m8vd4\" (UID: \"478661fa-1a31-4bbc-9566-541a84e1ccfc\") " pod="openshift-marketplace/community-operators-m8vd4" Dec 01 16:14:02 crc kubenswrapper[4637]: I1201 16:14:02.898615 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478661fa-1a31-4bbc-9566-541a84e1ccfc-catalog-content\") pod \"community-operators-m8vd4\" (UID: \"478661fa-1a31-4bbc-9566-541a84e1ccfc\") " pod="openshift-marketplace/community-operators-m8vd4" Dec 01 16:14:02 crc kubenswrapper[4637]: I1201 16:14:02.899053 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478661fa-1a31-4bbc-9566-541a84e1ccfc-utilities\") pod \"community-operators-m8vd4\" (UID: \"478661fa-1a31-4bbc-9566-541a84e1ccfc\") " pod="openshift-marketplace/community-operators-m8vd4" Dec 01 16:14:02 crc kubenswrapper[4637]: I1201 16:14:02.920122 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbhhv\" (UniqueName: \"kubernetes.io/projected/478661fa-1a31-4bbc-9566-541a84e1ccfc-kube-api-access-vbhhv\") pod \"community-operators-m8vd4\" (UID: \"478661fa-1a31-4bbc-9566-541a84e1ccfc\") " pod="openshift-marketplace/community-operators-m8vd4" Dec 01 16:14:03 crc kubenswrapper[4637]: I1201 16:14:03.028167 4637 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m8vd4" Dec 01 16:14:03 crc kubenswrapper[4637]: I1201 16:14:03.582853 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m8vd4"] Dec 01 16:14:04 crc kubenswrapper[4637]: I1201 16:14:04.139541 4637 generic.go:334] "Generic (PLEG): container finished" podID="478661fa-1a31-4bbc-9566-541a84e1ccfc" containerID="d0da46699e0b607e35a5130578a5872f80eb25ae25db6225e5d9e7f3d17fa590" exitCode=0 Dec 01 16:14:04 crc kubenswrapper[4637]: I1201 16:14:04.139605 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8vd4" event={"ID":"478661fa-1a31-4bbc-9566-541a84e1ccfc","Type":"ContainerDied","Data":"d0da46699e0b607e35a5130578a5872f80eb25ae25db6225e5d9e7f3d17fa590"} Dec 01 16:14:04 crc kubenswrapper[4637]: I1201 16:14:04.139919 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8vd4" event={"ID":"478661fa-1a31-4bbc-9566-541a84e1ccfc","Type":"ContainerStarted","Data":"0d17edab5e1f807768c79982ca6f51b1cf2e8b0962363fedd9a214cb4deb8f73"} Dec 01 16:14:10 crc kubenswrapper[4637]: I1201 16:14:10.200268 4637 generic.go:334] "Generic (PLEG): container finished" podID="478661fa-1a31-4bbc-9566-541a84e1ccfc" containerID="6b1b9df2adc660c36aa99c00cab186876eabd59ac4750f1d5479598731eab629" exitCode=0 Dec 01 16:14:10 crc kubenswrapper[4637]: I1201 16:14:10.200318 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8vd4" event={"ID":"478661fa-1a31-4bbc-9566-541a84e1ccfc","Type":"ContainerDied","Data":"6b1b9df2adc660c36aa99c00cab186876eabd59ac4750f1d5479598731eab629"} Dec 01 16:14:11 crc kubenswrapper[4637]: I1201 16:14:11.211315 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8vd4" 
event={"ID":"478661fa-1a31-4bbc-9566-541a84e1ccfc","Type":"ContainerStarted","Data":"fa37bac51bbfa9f0f05d93714e3afc0aabef54eb89dc84534b029608b6f03b51"} Dec 01 16:14:11 crc kubenswrapper[4637]: I1201 16:14:11.243262 4637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m8vd4" podStartSLOduration=2.698818996 podStartE2EDuration="9.243057624s" podCreationTimestamp="2025-12-01 16:14:02 +0000 UTC" firstStartedPulling="2025-12-01 16:14:04.143087427 +0000 UTC m=+5294.660796255" lastFinishedPulling="2025-12-01 16:14:10.687326055 +0000 UTC m=+5301.205034883" observedRunningTime="2025-12-01 16:14:11.233828824 +0000 UTC m=+5301.751537652" watchObservedRunningTime="2025-12-01 16:14:11.243057624 +0000 UTC m=+5301.760766452" Dec 01 16:14:13 crc kubenswrapper[4637]: I1201 16:14:13.028450 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m8vd4" Dec 01 16:14:13 crc kubenswrapper[4637]: I1201 16:14:13.028833 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m8vd4" Dec 01 16:14:13 crc kubenswrapper[4637]: I1201 16:14:13.076070 4637 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m8vd4" Dec 01 16:14:13 crc kubenswrapper[4637]: I1201 16:14:13.771710 4637 scope.go:117] "RemoveContainer" containerID="61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd" Dec 01 16:14:13 crc kubenswrapper[4637]: E1201 16:14:13.772411 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" 
podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:14:23 crc kubenswrapper[4637]: I1201 16:14:23.094351 4637 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m8vd4" Dec 01 16:14:23 crc kubenswrapper[4637]: I1201 16:14:23.175193 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m8vd4"] Dec 01 16:14:23 crc kubenswrapper[4637]: I1201 16:14:23.233760 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gftc2"] Dec 01 16:14:23 crc kubenswrapper[4637]: I1201 16:14:23.234395 4637 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gftc2" podUID="d8dea35a-d142-4d51-9045-eba9f8449490" containerName="registry-server" containerID="cri-o://f2df1e342ea4424b14cd00aa98745d3aa15e52c81fc12c7c900e8b67e208bcf4" gracePeriod=2 Dec 01 16:14:23 crc kubenswrapper[4637]: I1201 16:14:23.795621 4637 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gftc2" Dec 01 16:14:23 crc kubenswrapper[4637]: I1201 16:14:23.892101 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8dea35a-d142-4d51-9045-eba9f8449490-catalog-content\") pod \"d8dea35a-d142-4d51-9045-eba9f8449490\" (UID: \"d8dea35a-d142-4d51-9045-eba9f8449490\") " Dec 01 16:14:23 crc kubenswrapper[4637]: I1201 16:14:23.892181 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8dea35a-d142-4d51-9045-eba9f8449490-utilities\") pod \"d8dea35a-d142-4d51-9045-eba9f8449490\" (UID: \"d8dea35a-d142-4d51-9045-eba9f8449490\") " Dec 01 16:14:23 crc kubenswrapper[4637]: I1201 16:14:23.892217 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ctzg\" (UniqueName: \"kubernetes.io/projected/d8dea35a-d142-4d51-9045-eba9f8449490-kube-api-access-7ctzg\") pod \"d8dea35a-d142-4d51-9045-eba9f8449490\" (UID: \"d8dea35a-d142-4d51-9045-eba9f8449490\") " Dec 01 16:14:23 crc kubenswrapper[4637]: I1201 16:14:23.895508 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8dea35a-d142-4d51-9045-eba9f8449490-utilities" (OuterVolumeSpecName: "utilities") pod "d8dea35a-d142-4d51-9045-eba9f8449490" (UID: "d8dea35a-d142-4d51-9045-eba9f8449490"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 16:14:23 crc kubenswrapper[4637]: I1201 16:14:23.920214 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8dea35a-d142-4d51-9045-eba9f8449490-kube-api-access-7ctzg" (OuterVolumeSpecName: "kube-api-access-7ctzg") pod "d8dea35a-d142-4d51-9045-eba9f8449490" (UID: "d8dea35a-d142-4d51-9045-eba9f8449490"). InnerVolumeSpecName "kube-api-access-7ctzg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 16:14:23 crc kubenswrapper[4637]: I1201 16:14:23.974349 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8dea35a-d142-4d51-9045-eba9f8449490-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8dea35a-d142-4d51-9045-eba9f8449490" (UID: "d8dea35a-d142-4d51-9045-eba9f8449490"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 16:14:23 crc kubenswrapper[4637]: I1201 16:14:23.996257 4637 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8dea35a-d142-4d51-9045-eba9f8449490-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 16:14:23 crc kubenswrapper[4637]: I1201 16:14:23.996301 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ctzg\" (UniqueName: \"kubernetes.io/projected/d8dea35a-d142-4d51-9045-eba9f8449490-kube-api-access-7ctzg\") on node \"crc\" DevicePath \"\"" Dec 01 16:14:23 crc kubenswrapper[4637]: I1201 16:14:23.996310 4637 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8dea35a-d142-4d51-9045-eba9f8449490-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 16:14:24 crc kubenswrapper[4637]: I1201 16:14:24.335032 4637 generic.go:334] "Generic (PLEG): container finished" podID="d8dea35a-d142-4d51-9045-eba9f8449490" containerID="f2df1e342ea4424b14cd00aa98745d3aa15e52c81fc12c7c900e8b67e208bcf4" exitCode=0 Dec 01 16:14:24 crc kubenswrapper[4637]: I1201 16:14:24.335075 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gftc2" event={"ID":"d8dea35a-d142-4d51-9045-eba9f8449490","Type":"ContainerDied","Data":"f2df1e342ea4424b14cd00aa98745d3aa15e52c81fc12c7c900e8b67e208bcf4"} Dec 01 16:14:24 crc kubenswrapper[4637]: I1201 16:14:24.335101 4637 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-gftc2" event={"ID":"d8dea35a-d142-4d51-9045-eba9f8449490","Type":"ContainerDied","Data":"601fe9f829343b5828e945f5f23b059c9b461ae4ab1e01de5f2870012393e30b"} Dec 01 16:14:24 crc kubenswrapper[4637]: I1201 16:14:24.335118 4637 scope.go:117] "RemoveContainer" containerID="f2df1e342ea4424b14cd00aa98745d3aa15e52c81fc12c7c900e8b67e208bcf4" Dec 01 16:14:24 crc kubenswrapper[4637]: I1201 16:14:24.335240 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gftc2" Dec 01 16:14:24 crc kubenswrapper[4637]: I1201 16:14:24.369316 4637 scope.go:117] "RemoveContainer" containerID="24ff1032e41a2927d5355731718098da8524c674d2acc3c596cafaae56717edd" Dec 01 16:14:24 crc kubenswrapper[4637]: I1201 16:14:24.386801 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gftc2"] Dec 01 16:14:24 crc kubenswrapper[4637]: I1201 16:14:24.395229 4637 scope.go:117] "RemoveContainer" containerID="85d0f1048663ed7a0b3d7a3aa639e02e19d6cb3d11000aa11caacdea6154d43c" Dec 01 16:14:24 crc kubenswrapper[4637]: I1201 16:14:24.399834 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gftc2"] Dec 01 16:14:24 crc kubenswrapper[4637]: I1201 16:14:24.458584 4637 scope.go:117] "RemoveContainer" containerID="f2df1e342ea4424b14cd00aa98745d3aa15e52c81fc12c7c900e8b67e208bcf4" Dec 01 16:14:24 crc kubenswrapper[4637]: E1201 16:14:24.459301 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2df1e342ea4424b14cd00aa98745d3aa15e52c81fc12c7c900e8b67e208bcf4\": container with ID starting with f2df1e342ea4424b14cd00aa98745d3aa15e52c81fc12c7c900e8b67e208bcf4 not found: ID does not exist" containerID="f2df1e342ea4424b14cd00aa98745d3aa15e52c81fc12c7c900e8b67e208bcf4" Dec 01 16:14:24 crc kubenswrapper[4637]: I1201 
16:14:24.459347 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2df1e342ea4424b14cd00aa98745d3aa15e52c81fc12c7c900e8b67e208bcf4"} err="failed to get container status \"f2df1e342ea4424b14cd00aa98745d3aa15e52c81fc12c7c900e8b67e208bcf4\": rpc error: code = NotFound desc = could not find container \"f2df1e342ea4424b14cd00aa98745d3aa15e52c81fc12c7c900e8b67e208bcf4\": container with ID starting with f2df1e342ea4424b14cd00aa98745d3aa15e52c81fc12c7c900e8b67e208bcf4 not found: ID does not exist" Dec 01 16:14:24 crc kubenswrapper[4637]: I1201 16:14:24.459378 4637 scope.go:117] "RemoveContainer" containerID="24ff1032e41a2927d5355731718098da8524c674d2acc3c596cafaae56717edd" Dec 01 16:14:24 crc kubenswrapper[4637]: E1201 16:14:24.459863 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24ff1032e41a2927d5355731718098da8524c674d2acc3c596cafaae56717edd\": container with ID starting with 24ff1032e41a2927d5355731718098da8524c674d2acc3c596cafaae56717edd not found: ID does not exist" containerID="24ff1032e41a2927d5355731718098da8524c674d2acc3c596cafaae56717edd" Dec 01 16:14:24 crc kubenswrapper[4637]: I1201 16:14:24.459899 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24ff1032e41a2927d5355731718098da8524c674d2acc3c596cafaae56717edd"} err="failed to get container status \"24ff1032e41a2927d5355731718098da8524c674d2acc3c596cafaae56717edd\": rpc error: code = NotFound desc = could not find container \"24ff1032e41a2927d5355731718098da8524c674d2acc3c596cafaae56717edd\": container with ID starting with 24ff1032e41a2927d5355731718098da8524c674d2acc3c596cafaae56717edd not found: ID does not exist" Dec 01 16:14:24 crc kubenswrapper[4637]: I1201 16:14:24.459950 4637 scope.go:117] "RemoveContainer" containerID="85d0f1048663ed7a0b3d7a3aa639e02e19d6cb3d11000aa11caacdea6154d43c" Dec 01 16:14:24 crc 
kubenswrapper[4637]: E1201 16:14:24.460922 4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85d0f1048663ed7a0b3d7a3aa639e02e19d6cb3d11000aa11caacdea6154d43c\": container with ID starting with 85d0f1048663ed7a0b3d7a3aa639e02e19d6cb3d11000aa11caacdea6154d43c not found: ID does not exist" containerID="85d0f1048663ed7a0b3d7a3aa639e02e19d6cb3d11000aa11caacdea6154d43c" Dec 01 16:14:24 crc kubenswrapper[4637]: I1201 16:14:24.460971 4637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85d0f1048663ed7a0b3d7a3aa639e02e19d6cb3d11000aa11caacdea6154d43c"} err="failed to get container status \"85d0f1048663ed7a0b3d7a3aa639e02e19d6cb3d11000aa11caacdea6154d43c\": rpc error: code = NotFound desc = could not find container \"85d0f1048663ed7a0b3d7a3aa639e02e19d6cb3d11000aa11caacdea6154d43c\": container with ID starting with 85d0f1048663ed7a0b3d7a3aa639e02e19d6cb3d11000aa11caacdea6154d43c not found: ID does not exist" Dec 01 16:14:24 crc kubenswrapper[4637]: I1201 16:14:24.771163 4637 scope.go:117] "RemoveContainer" containerID="61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd" Dec 01 16:14:24 crc kubenswrapper[4637]: E1201 16:14:24.771851 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:14:25 crc kubenswrapper[4637]: I1201 16:14:25.783504 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8dea35a-d142-4d51-9045-eba9f8449490" path="/var/lib/kubelet/pods/d8dea35a-d142-4d51-9045-eba9f8449490/volumes" Dec 01 16:14:36 crc 
kubenswrapper[4637]: I1201 16:14:36.772147 4637 scope.go:117] "RemoveContainer" containerID="61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd" Dec 01 16:14:36 crc kubenswrapper[4637]: E1201 16:14:36.773169 4637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7rjd_openshift-machine-config-operator(2db6c86b-ff8c-4746-9c91-7dac0498c0b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" podUID="2db6c86b-ff8c-4746-9c91-7dac0498c0b9" Dec 01 16:14:49 crc kubenswrapper[4637]: I1201 16:14:49.782968 4637 scope.go:117] "RemoveContainer" containerID="61e9e3cc3cf3f25e61fab846e77f72437358512e5cdd973575dee568530fb7fd" Dec 01 16:14:50 crc kubenswrapper[4637]: I1201 16:14:50.584489 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7rjd" event={"ID":"2db6c86b-ff8c-4746-9c91-7dac0498c0b9","Type":"ContainerStarted","Data":"f59bed81f518c55dc2bc7f3dbb83fbecf8ffedc557da6918b5748836a7303e63"} Dec 01 16:15:00 crc kubenswrapper[4637]: I1201 16:15:00.192688 4637 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410095-f82mv"] Dec 01 16:15:00 crc kubenswrapper[4637]: E1201 16:15:00.193993 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8dea35a-d142-4d51-9045-eba9f8449490" containerName="registry-server" Dec 01 16:15:00 crc kubenswrapper[4637]: I1201 16:15:00.194011 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8dea35a-d142-4d51-9045-eba9f8449490" containerName="registry-server" Dec 01 16:15:00 crc kubenswrapper[4637]: E1201 16:15:00.194025 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8dea35a-d142-4d51-9045-eba9f8449490" containerName="extract-content" Dec 01 16:15:00 crc 
kubenswrapper[4637]: I1201 16:15:00.194031 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8dea35a-d142-4d51-9045-eba9f8449490" containerName="extract-content" Dec 01 16:15:00 crc kubenswrapper[4637]: E1201 16:15:00.194043 4637 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8dea35a-d142-4d51-9045-eba9f8449490" containerName="extract-utilities" Dec 01 16:15:00 crc kubenswrapper[4637]: I1201 16:15:00.194052 4637 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8dea35a-d142-4d51-9045-eba9f8449490" containerName="extract-utilities" Dec 01 16:15:00 crc kubenswrapper[4637]: I1201 16:15:00.194259 4637 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8dea35a-d142-4d51-9045-eba9f8449490" containerName="registry-server" Dec 01 16:15:00 crc kubenswrapper[4637]: I1201 16:15:00.195067 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29410095-f82mv" Dec 01 16:15:00 crc kubenswrapper[4637]: I1201 16:15:00.196861 4637 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 16:15:00 crc kubenswrapper[4637]: I1201 16:15:00.201588 4637 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 16:15:00 crc kubenswrapper[4637]: I1201 16:15:00.206213 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410095-f82mv"] Dec 01 16:15:00 crc kubenswrapper[4637]: I1201 16:15:00.257771 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvkz8\" (UniqueName: \"kubernetes.io/projected/8551f315-dcb9-4ed0-9d1d-e3038b08cab2-kube-api-access-bvkz8\") pod \"collect-profiles-29410095-f82mv\" (UID: \"8551f315-dcb9-4ed0-9d1d-e3038b08cab2\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29410095-f82mv"
Dec 01 16:15:00 crc kubenswrapper[4637]: I1201 16:15:00.257844 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8551f315-dcb9-4ed0-9d1d-e3038b08cab2-config-volume\") pod \"collect-profiles-29410095-f82mv\" (UID: \"8551f315-dcb9-4ed0-9d1d-e3038b08cab2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410095-f82mv"
Dec 01 16:15:00 crc kubenswrapper[4637]: I1201 16:15:00.257885 4637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8551f315-dcb9-4ed0-9d1d-e3038b08cab2-secret-volume\") pod \"collect-profiles-29410095-f82mv\" (UID: \"8551f315-dcb9-4ed0-9d1d-e3038b08cab2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410095-f82mv"
Dec 01 16:15:00 crc kubenswrapper[4637]: I1201 16:15:00.361284 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvkz8\" (UniqueName: \"kubernetes.io/projected/8551f315-dcb9-4ed0-9d1d-e3038b08cab2-kube-api-access-bvkz8\") pod \"collect-profiles-29410095-f82mv\" (UID: \"8551f315-dcb9-4ed0-9d1d-e3038b08cab2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410095-f82mv"
Dec 01 16:15:00 crc kubenswrapper[4637]: I1201 16:15:00.361798 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8551f315-dcb9-4ed0-9d1d-e3038b08cab2-config-volume\") pod \"collect-profiles-29410095-f82mv\" (UID: \"8551f315-dcb9-4ed0-9d1d-e3038b08cab2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410095-f82mv"
Dec 01 16:15:00 crc kubenswrapper[4637]: I1201 16:15:00.362140 4637 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8551f315-dcb9-4ed0-9d1d-e3038b08cab2-secret-volume\") pod \"collect-profiles-29410095-f82mv\" (UID: \"8551f315-dcb9-4ed0-9d1d-e3038b08cab2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410095-f82mv"
Dec 01 16:15:00 crc kubenswrapper[4637]: I1201 16:15:00.362662 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8551f315-dcb9-4ed0-9d1d-e3038b08cab2-config-volume\") pod \"collect-profiles-29410095-f82mv\" (UID: \"8551f315-dcb9-4ed0-9d1d-e3038b08cab2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410095-f82mv"
Dec 01 16:15:00 crc kubenswrapper[4637]: I1201 16:15:00.369532 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8551f315-dcb9-4ed0-9d1d-e3038b08cab2-secret-volume\") pod \"collect-profiles-29410095-f82mv\" (UID: \"8551f315-dcb9-4ed0-9d1d-e3038b08cab2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410095-f82mv"
Dec 01 16:15:00 crc kubenswrapper[4637]: I1201 16:15:00.385833 4637 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvkz8\" (UniqueName: \"kubernetes.io/projected/8551f315-dcb9-4ed0-9d1d-e3038b08cab2-kube-api-access-bvkz8\") pod \"collect-profiles-29410095-f82mv\" (UID: \"8551f315-dcb9-4ed0-9d1d-e3038b08cab2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29410095-f82mv"
Dec 01 16:15:00 crc kubenswrapper[4637]: I1201 16:15:00.522024 4637 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29410095-f82mv"
Dec 01 16:15:00 crc kubenswrapper[4637]: I1201 16:15:00.962388 4637 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410095-f82mv"]
Dec 01 16:15:01 crc kubenswrapper[4637]: I1201 16:15:01.688176 4637 generic.go:334] "Generic (PLEG): container finished" podID="8551f315-dcb9-4ed0-9d1d-e3038b08cab2" containerID="189d6e1b1fb859ee0278bbdccbbf0e2cbf2822cc9b077717f91b17310d010e12" exitCode=0
Dec 01 16:15:01 crc kubenswrapper[4637]: I1201 16:15:01.688519 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29410095-f82mv" event={"ID":"8551f315-dcb9-4ed0-9d1d-e3038b08cab2","Type":"ContainerDied","Data":"189d6e1b1fb859ee0278bbdccbbf0e2cbf2822cc9b077717f91b17310d010e12"}
Dec 01 16:15:01 crc kubenswrapper[4637]: I1201 16:15:01.688546 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29410095-f82mv" event={"ID":"8551f315-dcb9-4ed0-9d1d-e3038b08cab2","Type":"ContainerStarted","Data":"2f1ba40e70505893672a48faf8f87e7ac4f4c7d79edee83b55a993ea02ba9809"}
Dec 01 16:15:03 crc kubenswrapper[4637]: I1201 16:15:03.052796 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29410095-f82mv"
Dec 01 16:15:03 crc kubenswrapper[4637]: I1201 16:15:03.222106 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8551f315-dcb9-4ed0-9d1d-e3038b08cab2-config-volume\") pod \"8551f315-dcb9-4ed0-9d1d-e3038b08cab2\" (UID: \"8551f315-dcb9-4ed0-9d1d-e3038b08cab2\") "
Dec 01 16:15:03 crc kubenswrapper[4637]: I1201 16:15:03.222335 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvkz8\" (UniqueName: \"kubernetes.io/projected/8551f315-dcb9-4ed0-9d1d-e3038b08cab2-kube-api-access-bvkz8\") pod \"8551f315-dcb9-4ed0-9d1d-e3038b08cab2\" (UID: \"8551f315-dcb9-4ed0-9d1d-e3038b08cab2\") "
Dec 01 16:15:03 crc kubenswrapper[4637]: I1201 16:15:03.222498 4637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8551f315-dcb9-4ed0-9d1d-e3038b08cab2-secret-volume\") pod \"8551f315-dcb9-4ed0-9d1d-e3038b08cab2\" (UID: \"8551f315-dcb9-4ed0-9d1d-e3038b08cab2\") "
Dec 01 16:15:03 crc kubenswrapper[4637]: I1201 16:15:03.222676 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8551f315-dcb9-4ed0-9d1d-e3038b08cab2-config-volume" (OuterVolumeSpecName: "config-volume") pod "8551f315-dcb9-4ed0-9d1d-e3038b08cab2" (UID: "8551f315-dcb9-4ed0-9d1d-e3038b08cab2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 16:15:03 crc kubenswrapper[4637]: I1201 16:15:03.223740 4637 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8551f315-dcb9-4ed0-9d1d-e3038b08cab2-config-volume\") on node \"crc\" DevicePath \"\""
Dec 01 16:15:03 crc kubenswrapper[4637]: I1201 16:15:03.228460 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8551f315-dcb9-4ed0-9d1d-e3038b08cab2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8551f315-dcb9-4ed0-9d1d-e3038b08cab2" (UID: "8551f315-dcb9-4ed0-9d1d-e3038b08cab2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 16:15:03 crc kubenswrapper[4637]: I1201 16:15:03.233108 4637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8551f315-dcb9-4ed0-9d1d-e3038b08cab2-kube-api-access-bvkz8" (OuterVolumeSpecName: "kube-api-access-bvkz8") pod "8551f315-dcb9-4ed0-9d1d-e3038b08cab2" (UID: "8551f315-dcb9-4ed0-9d1d-e3038b08cab2"). InnerVolumeSpecName "kube-api-access-bvkz8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 16:15:03 crc kubenswrapper[4637]: I1201 16:15:03.326179 4637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvkz8\" (UniqueName: \"kubernetes.io/projected/8551f315-dcb9-4ed0-9d1d-e3038b08cab2-kube-api-access-bvkz8\") on node \"crc\" DevicePath \"\""
Dec 01 16:15:03 crc kubenswrapper[4637]: I1201 16:15:03.326225 4637 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8551f315-dcb9-4ed0-9d1d-e3038b08cab2-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 01 16:15:03 crc kubenswrapper[4637]: I1201 16:15:03.706755 4637 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29410095-f82mv" event={"ID":"8551f315-dcb9-4ed0-9d1d-e3038b08cab2","Type":"ContainerDied","Data":"2f1ba40e70505893672a48faf8f87e7ac4f4c7d79edee83b55a993ea02ba9809"}
Dec 01 16:15:03 crc kubenswrapper[4637]: I1201 16:15:03.706797 4637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f1ba40e70505893672a48faf8f87e7ac4f4c7d79edee83b55a993ea02ba9809"
Dec 01 16:15:03 crc kubenswrapper[4637]: I1201 16:15:03.706857 4637 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29410095-f82mv"
Dec 01 16:15:04 crc kubenswrapper[4637]: I1201 16:15:04.127259 4637 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410050-dnlj7"]
Dec 01 16:15:04 crc kubenswrapper[4637]: I1201 16:15:04.136222 4637 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29410050-dnlj7"]
Dec 01 16:15:05 crc kubenswrapper[4637]: I1201 16:15:05.785006 4637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40ab58bb-0724-43fc-b2c9-726f4090bd35" path="/var/lib/kubelet/pods/40ab58bb-0724-43fc-b2c9-726f4090bd35/volumes"
Dec 01 16:15:36 crc kubenswrapper[4637]: I1201 16:15:36.274356 4637 scope.go:117] "RemoveContainer" containerID="7eec05aecbc7a079cc7e4908b49a335b45eede94a929ca124b98feb8b1d6f380"